Kenjiro Cho
Internet Initiative Japan
Publication
Featured research published by Kenjiro Cho.
ACM Special Interest Group on Data Communication | 2006
Kenjiro Cho; Kensuke Fukuda; Hiroshi Esaki; Akira Kato
It has been reported worldwide that peer-to-peer traffic is taking up a significant portion of backbone networks. In particular, it is prominent in Japan because of the high penetration rate of fiber-based broadband access. In this paper, we first report aggregated traffic measurements collected over 21 months from seven ISPs covering 42% of the Japanese backbone traffic. The backbone is dominated by symmetric residential traffic, which increased 37% in 2005. We further investigate residential per-customer traffic in one of the ISPs by comparing DSL and fiber users, heavy-hitters and normal users, and geographic traffic matrices. The results reveal that a small segment of users dictates the overall behavior; 4% of heavy-hitters account for 75% of the inbound volume, and fiber users account for 86% of the inbound volume. About 63% of the total residential volume is user-to-user traffic. The dominant applications exhibit poor locality and communicate with a wide range and number of peers. The distribution of heavy-hitters is heavy-tailed without a clear boundary between heavy-hitters and normal users, which suggests that users start playing with peer-to-peer applications, become heavy-hitters, and eventually shift from DSL to fiber. We provide conclusive empirical evidence from a large and diverse set of commercial backbone data that the emergence of new attractive applications has drastically affected traffic usage and capacity-engineering requirements.
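Heavy-hitter shares like "4% of users account for 75% of the volume" come from ranking per-customer volumes. A minimal sketch of that computation, using synthetic heavy-tailed data rather than the paper's dataset:

```python
# Hypothetical illustration: share of volume carried by the top x% of
# customers. The Pareto-like data below is synthetic, not the paper's.

def heavy_hitter_share(volumes, top_fraction):
    """Fraction of total volume carried by the top `top_fraction`
    of customers, ranked by per-customer volume."""
    ranked = sorted(volumes, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

# Synthetic heavy-tailed per-customer volumes (rank ** -1.2 decay):
volumes = [1000 / (rank ** 1.2) for rank in range(1, 10001)]
share = heavy_hitter_share(volumes, 0.04)
print(f"top 4% carry {share:.0%} of the volume")
```

With a heavy-tailed distribution like this, a few percent of users dominate the total, mirroring the skew reported in the abstract.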
ACM Special Interest Group on Data Communication | 2007
Guillaume Dewaele; Kensuke Fukuda; Pierre Borgnat; Patrice Abry; Kenjiro Cho
A new profile-based anomaly detection and characterization procedure is proposed. It aims at performing prompt and accurate detection of both short-lived and long-lasting low-intensity anomalies, without recourse to any prior knowledge of the targeted traffic. Key features of the algorithm lie in the joint use of random projection techniques (sketches) and of multiresolution non-Gaussian marginal distribution modeling. The former enables both a reduction in the dimensionality of the data and the measurement of the reference (i.e., normal) traffic behavior, while the latter extracts anomalies at different aggregation levels. This procedure is used to blindly analyze a large-scale packet trace database collected on a trans-Pacific transit link from 2001 to 2006. It can detect and identify a large number of known and unknown anomalies and attacks whose intensities are low (down to below one percent). Using sketches also makes possible the real-time identification of the source or destination IP addresses associated with a detected anomaly, and hence their mitigation.
ACM Special Interest Group on Data Communication | 2005
Kensuke Fukuda; Kenjiro Cho; Hiroshi Esaki
This paper investigates the effects of rapidly growing residential broadband traffic on commercial ISP backbone networks. We collected month-long aggregated traffic logs for different traffic groups from seven major ISPs in Japan in order to analyze the macro-level impact of residential broadband traffic. These traffic groups are carefully selected to be summable and not to count the same traffic multiple times. Our results show that (1) the aggregated residential broadband customer traffic in our data exceeds 100 Gbps on average. Our data is considered to cover 41% of the total customer traffic in Japan, so we estimate that the total residential broadband traffic in Japan is currently about 250 Gbps. (2) About 70% of the residential broadband traffic is constant over the day; the rest follows a daily pattern peaking in the evening hours. The behavior of residential broadband traffic deviates considerably from academic or office traffic. (3) The total traffic volume of residential users is much higher than that of office users, so backbone traffic is dominated by the behavior of residential user traffic. (4) The traffic volume exchanged through domestic private peering is comparable with the volume exchanged through the major IXes. (5) Within the external traffic of ISPs, international traffic is about 23% of inbound and about 17% of outbound. (6) The distribution of regional broadband traffic is roughly proportional to regional population. We expect other countries will experience similar traffic patterns as residential broadband access becomes widespread.
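The national estimate is a simple coverage scaling: traffic observed at ISPs covering a known fraction of the market is divided by that fraction. A worked version of the arithmetic:

```python
# Back-of-the-envelope scaling behind the ~250 Gbps estimate:
# observed traffic divided by the measured ISPs' market coverage.
measured_gbps = 100   # aggregated residential traffic in the dataset
coverage = 0.41       # fraction of Japanese customer traffic covered
estimate = measured_gbps / coverage
print(f"estimated national residential traffic: {estimate:.0f} Gbps")  # → 244 Gbps
```

100 / 0.41 is roughly 244 Gbps, which the paper rounds to "about 250 Gbps".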
Conference on Emerging Network Experiment and Technology | 2008
Kenjiro Cho; Kensuke Fukuda; Hiroshi Esaki; Akira Kato
It is often argued that rapidly increasing video content, along with the penetration of high-speed access, is leading to explosive growth in Internet traffic. Contrary to this popular claim, technically solid reports show only modest traffic growth worldwide. This paper sheds light on the causes of the apparently slow growth trends by analyzing commercial residential traffic in Japan, where the fiber access rate is much higher than in other countries. We first report that Japanese residential traffic also has modest growth rates, using aggregated measurements from six ISPs. Then, we investigate residential per-customer traffic in one ISP by comparing traffic in 2005 and 2008, before and after the advent of YouTube and other similar services. Although at first glance a small segment of peer-to-peer users still dictates the overall volume, they are slightly decreasing in population and volume share. Meanwhile, the rest of the users are steadily moving towards rich media content with increased diversity. Surely, a huge amount of online data and abundant headroom in access capacity can conceivably lead to massive traffic growth at some point in the future. The observed trends, however, suggest that video content is unlikely to disastrously overflow the Internet, at least not anytime soon.
ACM Special Interest Group on Data Communication | 2004
Kenjiro Cho; Matthew J. Luckie; Bradley Huffaker
One of the major hurdles limiting IPv6 adoption is the existence of poorly managed experimental IPv6 sites that negatively affect the perceived quality of the IPv6 Internet. To assist network operators in improving IPv6 networks, we are exploring methods to identify wide-area IPv6 network problems. Our approach makes use of parallel IPv4 and IPv6 connectivity to dual-stacked nodes. We identify the existence of an IPv6 path problem by comparing IPv6 delay measurements to IPv4 delay measurements. Our test results indicate that the majority of IPv6 paths have delay characteristics comparable to those of IPv4, although a small number of paths exhibit a much larger delay with IPv6. Thus, we hope to improve the quality of the IPv6 Internet by identifying the worst set of problems. Our methodology is simple. We create a list of systems with IPv6 and IPv4 addresses in actual use by monitoring DNS messages. We then measure delay to each address in order to select a few systems per site based on their IPv6:IPv4 response-time ratios. Finally, we run traceroute with Path MTU Discovery to the selected systems and then visualize the results for comparative path analysis. This paper presents the tools used to support this study and the results of our measurements conducted from two locations in Japan and one in Spain.
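The IPv6:IPv4 response-time ratio used for selecting target systems can be sketched as follows; the function name, host names, and RTT samples are illustrative, not taken from the paper's tools:

```python
# Hedged sketch: rank dual-stacked hosts by the ratio of their median
# IPv6 RTT to their median IPv4 RTT. A ratio well above 1 suggests an
# IPv6-specific path problem worth probing with traceroute.
import statistics

def rtt_ratio(v6_rtts_ms, v4_rtts_ms):
    """IPv6:IPv4 response-time ratio; > 1 means IPv6 is slower."""
    return statistics.median(v6_rtts_ms) / statistics.median(v4_rtts_ms)

# Illustrative samples for two hypothetical dual-stacked hosts:
hosts = {
    "host-a": ([22.0, 21.5, 23.1], [21.0, 21.2, 20.8]),   # IPv6 ~ IPv4
    "host-b": ([180.0, 175.2, 190.4], [20.5, 21.0, 20.7]),  # IPv6 far slower
}
worst_first = sorted(hosts, key=lambda h: rtt_ratio(*hosts[h]), reverse=True)
print(worst_first)  # → ['host-b', 'host-a']
```

Using the median makes the ratio robust to occasional RTT outliers, which matters when only a handful of probes per address are available.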
Proceedings of the Special Workshop on Internet and Disasters | 2011
Kenjiro Cho; Cristel Pelsser; Randy Bush; Youngjoon Won
The Great East Japan Earthquake and Tsunami on March 11, 2011, disrupted a significant part of the communications infrastructure both within the country and in connectivity to the rest of the world. Nonetheless, many users, especially in the Tokyo area, reported that voice networks did not work while the Internet did. At a macro level, the Internet was impressively resilient to the disaster, aside from the areas directly hit by the quake and the ensuing tsunami. However, little is known about how the Internet was running during this period. We investigate the impact of the disaster on one major Japanese Internet Service Provider (ISP) by examining measurements of traffic volumes and routing data from within the ISP, as well as routing data from an external neighbor ISP. Although we can clearly see circuit failures and subsequent repairs within the ISP, surprisingly little disruption was observed from outside.
International Journal of Network Management | 2010
Guillaume Dewaele; Yosuke Himura; Pierre Borgnat; Kensuke Fukuda; Patrice Abry; Olivier Michel; Romain Fontugne; Kenjiro Cho; Hiroshi Esaki
A novel host behavior classification approach is proposed as a preliminary step toward traffic classification and anomaly detection in network communication. Although many attempts described in the literature were devoted to flow or application classification, these approaches are not always adaptable to the operational constraints of traffic monitoring (expected to work even without packet payload, without bidirectionality, on high-speed networks, or from flow reports only, etc.). Instead, the classification proposed here relies on the leading idea that traffic is relevantly analyzed in terms of typical host behaviors: typical connection patterns of both legitimate applications (data sharing, downloading, etc.) and anomalous (possibly aggressive) behaviors are obtained by profiling traffic at the host level using unsupervised statistical classification. Classification at the host level is not reducible to flow or application classification, and neither is the contrary: they are different operations which might have complementary roles in network management. The proposed host classification is based on a nine-dimensional feature space evaluating host Internet connectivity, dispersion, and exchanged traffic content. A minimum spanning tree (MST) clustering technique is developed that does not require any supervised learning step to produce a set of statistically established typical host behaviors. Not relying on a priori defined classes of known behaviors enables the procedure to discover new host behaviors that were potentially never observed before. This procedure is applied to traffic collected over the entire year of 2008 on a trans-Pacific (Japan/USA) link. A cross-validation of this unsupervised classification against a classical port-based inspection and a state-of-the-art method provides an assessment of the meaningfulness and relevance of the obtained classes of host behaviors.
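A toy version of MST clustering, under simplifying assumptions (2-D points instead of the paper's nine-dimensional host features, and Euclidean distance): build the minimum spanning tree of the complete distance graph, then cut the longest MST edges so the remaining components form clusters, with no supervised learning step.

```python
# Toy MST clustering: Prim's algorithm builds the tree, cutting the
# (n_clusters - 1) longest edges yields the clusters.
import math

def mst_clusters(points, n_clusters):
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    # Prim's algorithm: grow the MST from node 0, recording each edge.
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((dist(u, v), u, v))
        in_tree.add(v)
    # Keep the shortest edges, i.e. cut the n_clusters - 1 longest ones.
    kept = sorted(edges)[: n - n_clusters]
    adj = {i: set() for i in range(n)}
    for _, u, v in kept:
        adj[u].add(v)
        adj[v].add(u)
    # Connected components of the cut tree are the clusters.
    seen, clusters = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

print(mst_clusters([(0, 0), (0, 1), (10, 10), (10, 11)], 2))  # → [{0, 1}, {2, 3}]
```

Because no classes are defined in advance, a previously unseen behavior simply shows up as a new component, which is the property the abstract emphasizes.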
Internet Measurement Conference | 2005
Matthew J. Luckie; Kenjiro Cho; Bill Owens
If a host can send packets larger than an Internet path can forward, it relies on the timely delivery of Internet Control Message Protocol (ICMP) messages advising that the packet is too big to forward. An ICMP Packet Too Big message reports the largest packet size - or Maximum Transmission Unit (MTU) - that can be forwarded to the next hop. The iterative process of determining the largest packet size supported by a path by learning the next-hop MTU of each MTU-constraining link on the path is known as Path MTU Discovery (PMTUD). It is fundamental to the optimal operation of the Internet. There is a perception that PMTUD is not working well in the modern Internet due to ICMP messages being firewalled or otherwise disabled due to security concerns. This paper provides a review of modern PMTUD failure modes. We present a tool designed to help network operators and users infer the location of a failure. The tool provides fairly detailed information about each failure, so the failure can be resolved. Finally, we provide data on the failures that occurred on a large jumbo-capable network and find that although disabling ICMP messages is a problem, many other failure modes were found.
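The iterative PMTUD loop described above can be simulated in a few lines. The hop MTUs below are made up, and real PMTUD relies on ICMP Packet Too Big feedback rather than a known list of link MTUs:

```python
# Simulated Path MTU Discovery: the sender starts at its local MTU and,
# each time a hop "returns" Packet Too Big with its next-hop MTU,
# lowers the packet size and retries, until a probe crosses the path.

def discover_path_mtu(link_mtus, initial_mtu=9000):
    """link_mtus: MTU of each hop on the path.
    Returns (discovered path MTU, number of probes sent)."""
    size, probes = initial_mtu, 0
    while True:
        probes += 1
        for mtu in link_mtus:
            if size > mtu:
                size = mtu   # ICMP Packet Too Big reports the next-hop MTU
                break        # probe dropped; retry with the smaller size
        else:
            return size, probes  # probe made it end to end

print(discover_path_mtu([9000, 4470, 1500, 9000]))  # → (1500, 3)
```

If the ICMP messages in this loop are filtered, the sender keeps retransmitting oversized packets that are silently dropped, which is exactly the failure mode the paper's tool is designed to locate.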
Archive | 2005
Kenjiro Cho; Philippe Jacquet
Invited Papers:
- Efficient Blacklisting and Pollution-Level Estimation in P2P File-Sharing Systems
- Building Tailored Wireless Sensor Networks
- Users and Services in Intelligent Networks
Wireless, Mobility and Emergency Network:
- MAC Protocol for Contacts from Survivors in Disaster Areas Using Multi-hop Wireless Transmissions
- Performance Evaluation of Mobile IPv6 Wireless Test-Bed in Micro-mobility Environment with Hierarchical Design and Multicast Buffering
- Prioritisation of Data Partitioned MPEG-4 Video over GPRS/EGPRS Mobile Networks
Routing in Ad-Hoc Network:
- Load Balancing QoS Multicast Routing Protocol in Mobile Ad Hoc Networks
- A Framework for the Comparison of AODV and OLSR Protocols
- Distributed Node Location in Clustered Multi-hop Wireless Networks
Extending MANET:
- A Duplicate Address Detection and Autoconfiguration Mechanism for a Single-Interface OLSR Network
- On the Application of Mobility Predictions to Multipoint Relaying in MANETs: Kinetic Multipoint Relays
Invited Position Paper:
- The Architecture of the Future Wireless Internet
Securing Network:
- Security Threats and Countermeasures in WLAN
- Securing OLSR Routes
- SPS: A Simple Filtering Algorithm to Thwart Phishing Attacks
Multi-services in IP-Based Networks:
- On the Stability of Server Selection Algorithms Against Network Fluctuations
- Application-Level Versus Network-Level Proximity
- An Adaptive Application Flooding for Efficient Data Dissemination in Dense Ad-Hoc Networks
Measurement and Performance Analysis:
- Multicast Packet Loss Measurement and Analysis over Unidirectional Satellite Network
- On a Novel Filtering Mechanism for Capacity Estimation: Extended Version
- TCP Retransmission Monitoring and Configuration Tuning on AI3 Satellite Link
- An Analytical Evaluation of Autocorrelations in TCP Traffic
Signal Processing: Image Communication | 2011
Thomas Silverston; Loránd Jakab; Albert Cabellos-Aparicio; Olivier Fourmaux; Kavé Salamatian; Kenjiro Cho
P2P-TV is an emerging alternative to classical television broadcast systems. Leveraging the possibilities offered by the Internet, several companies offer P2P-TV services to their customers. The overwhelming majority of these systems, however, are of a closed nature, offering little insight into their traffic properties. For a better understanding of the P2P-TV landscape, we performed measurement experiments in France, Japan, Spain, and Romania, using different commercial applications. By using multiple measurement points in different locations around the world, our results paint a global picture of the measured networks, inferring their main properties. More precisely, we focus on the level of collaboration between peers, their location, and the effect of the traffic on the networks. Our results show that there is no fairness between peers, and that this is an important issue for the scalability of P2P-TV systems. Moreover, hundreds of Autonomous Systems are involved in the P2P-TV traffic, which points to the lack of locality-aware mechanisms in these systems. The geographic location of peers testifies to the wide spread of these applications in Asia and highlights their worldwide usage.