Hiroshi Fujinoki
Southern Illinois University Edwardsville
Publication
Featured research published by Hiroshi Fujinoki.
Local Computer Networks | 1999
Hiroshi Fujinoki; Kenneth J. Christensen
This paper presents the new shortest best path tree (SBPT) algorithm for multicast trees. The SBPT algorithm establishes and maintains dynamic multicast trees that maximize the bandwidth shared by multiple receivers while simultaneously guaranteeing the shortest path for each receiver node. The SBPT algorithm is a distributed algorithm whose cost is of the same order as the combined cost of the shortest path tree (SPT) and Greedy algorithms. The SBPT algorithm reduces bandwidth consumption by reusing partial paths already established for other multicast receiver nodes; it finds such partial paths when multiple shortest paths exist. Simulation experiments comparing the SBPT and SPT algorithms show that the SBPT algorithm reduces bandwidth consumption by 5% to 17% when node utilization exceeds approximately 25%, while always achieving the same shortest path lengths.
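The abstract gives no pseudocode, but the tie-breaking idea is easy to illustrate. The sketch below (the graph representation, function name, and data structures are my assumptions, not the paper's) runs an ordinary Dijkstra search and, whenever several predecessors yield the same shortest distance, prefers an edge already in the multicast tree, so a new receiver reuses partial paths without lengthening its own path:

```python
import heapq

def sbpt_join(graph, source, receiver, tree_edges):
    """Hypothetical sketch of the SBPT idea, not the paper's algorithm:
    run Dijkstra from the source, and when several predecessors give the
    same shortest distance, prefer an edge already in the multicast tree
    so the new receiver reuses partial paths at no cost in path length.

    graph: {node: {neighbor: link_cost}}; tree_edges: set of (u, v) pairs.
    """
    dist = {source: 0}
    pred = {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                pred[v] = u
                heapq.heappush(heap, (nd, v))
            elif nd == dist.get(v) and (u, v) in tree_edges:
                pred[v] = u                 # tie: reuse an existing tree edge
    # walk back from the receiver to recover its tree-sharing shortest path
    path, node = [], receiver
    while node is not None:
        path.append(node)
        node = pred[node]
    return list(reversed(path))
```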
International Conference on Networks | 2008
Hiroshi Fujinoki
Border Gateway Protocol version 4 (BGP-4) is the routing protocol for inter-domain routing in the Internet. Although BGP-4 is a scalable distributed routing protocol, it propagates only the selected best path for a destination to other autonomous systems. This property, together with BGP's long convergence delay, is known to cause serious inefficiency in network resource utilization and vulnerability to link failures. This paper proposes and describes a new routing protocol, MBGP (Multi-path BGP), which solves these problems by dynamically utilizing multiple concurrent BGP paths in today's Internet without routing loops. MBGP is designed to coexist with existing BGP routers. Performance analysis indicates that MBGP incurs O(N) processing overhead for each MBGP message. Concurrent inter-domain multi-path routing by MBGP with these advantages will enhance efficiency in the future Internet.
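As a rough illustration of how concurrent multi-path routing can stay loop-free, the sketch below (a simplification of my own, not the paper's algorithm; the names and path limit are assumptions) keeps several AS paths per prefix and rejects any path already containing the local AS number, which is the standard BGP loop guard. Scanning the AS path this way is linear in its length, consistent with the stated O(N) per-message overhead:

```python
def mbgp_update(rib, prefix, as_path, local_as, max_paths=4):
    """Illustrative sketch (not the paper's exact algorithm): keep several
    loop-free AS paths per prefix instead of only the single best one.

    rib: {prefix: list of AS-path tuples}; as_path: tuple of AS numbers.
    """
    if local_as in as_path:          # would form a routing loop: discard
        return rib
    paths = rib.setdefault(prefix, [])
    if as_path not in paths:
        paths.append(as_path)
        paths.sort(key=len)          # prefer shorter AS paths
        del paths[max_paths:]        # bound state: keep only the k best
    return rib
```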
Computer Communications | 2000
Hiroshi Fujinoki; Kenneth J. Christensen
The new Path Length Control (PLC) algorithm establishes and maintains multicast trees that maximize the bandwidth shared by multiple receivers while satisfying a maximum path length bound for each receiver. The PLC algorithm can be implemented as a distributed algorithm, can trade off end-to-end delay against bandwidth consumption, and can be implemented to execute in polynomial time. Analysis and simulation show that the PLC algorithm generates multicast trees that (a) consume less bandwidth than those generated by the SPT algorithm while guaranteeing the same shortest path lengths and (b) consume less bandwidth than trees generated by the Greedy algorithm with only a moderate increase in path length. The PLC algorithm is more flexible and has a lower cost than a combined SPT and Greedy algorithm.
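To make the delay/bandwidth tradeoff concrete, here is a hedged sketch (the parameter k, the names, and the selection heuristic are illustrative, not taken from the paper): among candidate join paths that satisfy the path-length bound, pick the one that reuses the most edges already in the tree:

```python
def plc_select(candidates, shortest_len, tree_edges, k=1.5):
    """Hedged sketch of the PLC-style tradeoff (k is an illustrative
    parameter): accept only paths within k times the shortest path, and
    among those, maximize reuse of edges already in the multicast tree.

    candidates: list of paths, each a list of nodes.
    """
    best, best_reuse = None, -1
    for path in candidates:
        if len(path) - 1 > k * shortest_len:   # violates path-length bound
            continue
        reuse = sum((u, v) in tree_edges
                    for u, v in zip(path, path[1:]))
        if reuse > best_reuse:
            best, best_reuse = path, reuse
    return best
```

Setting k = 1.0 degenerates to SPT-like behavior (shortest paths only), while larger k admits longer paths that share more bandwidth, which is the tradeoff the algorithm exposes.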
Availability, Reliability and Security | 2009
Hiroshi Fujinoki
Multi-homing is a network configuration that connects a customer network to multiple service providers; it is used to improve fault tolerance and throughput. One of its problems is the lack of dynamic load balancing for inbound network traffic to multi-homed networks, which prevents multi-homing from being exploited to improve reliability for inbound traffic. This paper proposes a new routing architecture and protocol, BGP-MHLB/I (BGP Multi-Home Load Balancing/Inbound), to realize dynamic load balancing for inbound traffic to multi-homed networks. In MHLB/I routing, reliability improves by a factor of (m × n), where m is the number of multiple BGP paths available between two end customer networks and n is the degree of multi-homing. Our analysis found approximately 80 multiple BGP paths available between two customer networks when paths up to two extra AS hops are allowed. This finding suggests that the proposed BGP-MHLB/I routing will be an effective solution for improving reliability in the Internet.
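A back-of-the-envelope calculation shows why the (m × n) factor matters. Assuming independent failures and fully disjoint paths (a simplification of mine; the paper's model may differ), a transmission survives unless all m × n alternatives fail:

```python
def survival_probability(p_fail, m, n):
    """Illustrative only (assumes independent failures and fully disjoint
    paths, which simplifies the paper's model): with m concurrent BGP
    paths and multi-homing degree n, roughly m * n alternatives exist,
    and a transmission is lost only if every one of them fails.
    """
    return 1 - p_fail ** (m * n)

# e.g. m = 4 concurrent BGP paths, n = 2 providers, 10% per-path failure:
# survival_probability(0.10, 4, 2) -> 1 - 0.1**8, i.e. about 0.99999999
```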
Local Computer Networks | 2002
Hiroshi Fujinoki; Kiran K. Gollamudi
An ongoing research activity to reduce response time in Web file transmissions is presented. A new transmission technique, object packaging, is proposed to reduce response time for Web browsing. Object packaging aims to reduce overhead not only at routers on a transmission path but also at the transmitting Web server. Object packaging addresses two different sources of overhead in Web file transmissions: (1) disk access and (2) protocol processing. An experimental study showed that file transmission with object packaging reduced response time, the number of transferred bytes, and the number of packets by 34.7%, 40.1%, and 7.1%, respectively, for files with an average size of 10 KB, which is the average file size in Web traffic in the current Internet.
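The paper does not specify a wire format here, but the packaging idea can be sketched as follows (the length-prefixed JSON index below is a format I invented purely for illustration): bundle all objects of a page into one payload so the server performs one disk read and one transfer instead of many:

```python
import json
import struct

def package_objects(objects):
    """Illustrative sketch of object packaging; the format is made up,
    not the paper's. objects: {name: bytes}. Returns one payload with a
    length-prefixed JSON index followed by the concatenated objects."""
    index, offset, blobs = {}, 0, []
    for name, data in objects.items():
        index[name] = (offset, len(data))   # where each object starts
        offset += len(data)
        blobs.append(data)
    header = json.dumps(index).encode()
    return struct.pack("!I", len(header)) + header + b"".join(blobs)

def unpackage(payload):
    """Client-side counterpart: recover the individual objects."""
    hlen, = struct.unpack("!I", payload[:4])
    index = json.loads(payload[4:4 + hlen])
    body = payload[4 + hlen:]
    return {name: body[off:off + size]
            for name, (off, size) in index.items()}
```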
International Conference on Ultra Modern Telecommunications | 2009
Hiroshi Fujinoki
We analyzed how reliability, defined as the probability of continuing transmissions across link failures, is affected by inter-domain multi-path and multi-homing routing as the structure of the Internet changes. The goal of this project is to identify properties of the ideal network structure that maximizes the advantage of multi-path and multi-home routing. We focused on how each end-to-end path is built, how many multi-paths exist, and how each path is composed of single-path and multi-path segments. The results of the analyses showed that multi-path and multi-home routing can improve reliability by 10 to 30% in absolute probability of survival on link failures, two to six times better than routing with neither multi-path nor multi-homing. The analyses also identified several interesting properties. It is important to keep paths short to maximize the benefit of multi-path routing, but neither a large number of multi-paths nor a large number of multi-homing connections is necessary. Multi-homing configurations of degree three improved reliability by up to 50% over degree two for link failure rates up to 50% in our analyses. We also found that the single-path edge sections of a path should be short for multi-path routing. The results of this analysis can serve as a guide in structuring the future Internet.
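The style of analysis can be mimicked with a small Monte Carlo sketch (the segment model and parameters below are my simplification, not the paper's exact model): a path is a sequence of segments, each offering some number of parallel alternatives, and a transmission survives only if every segment keeps at least one working alternative:

```python
import random

def survival(segments, p_fail, trials=100_000):
    """Hedged Monte Carlo sketch: every link fails independently with
    probability p_fail; an alternative works if all its links survive;
    a segment works if any alternative works; the path works if all
    segments work. segments: list of (num_alternatives, links_per_alt)."""
    ok = 0
    for _ in range(trials):
        if all(any(all(random.random() > p_fail for _ in range(links))
                   for _ in range(alts))
               for alts, links in segments):
            ok += 1
    return ok / trials

# e.g. a 1-link single-path edge, a 3-way multi-path core, another edge:
# survival([(1, 1), (3, 4), (1, 1)], p_fail=0.1)
```

Running this with a longer single-path edge segment quickly shows why single-path edge sections should be short: a segment with no alternatives dominates the overall failure probability.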
Computer Communications | 2001
Hiroshi Fujinoki; Kenneth J. Christensen
The new Directed Reverse Path Join (DRPJ) protocol efficiently implements a Greedy routing algorithm for generating a multicast tree. The DRPJ protocol minimizes the messaging overhead from probe messages and allows a joining node to find multiple paths that are not constrained to be only the shortest paths. This enables a controllable tradeoff between path length and bandwidth consumption. Using simulation, the DRPJ protocol is compared to the existing Flooding with TTL (Time-To-Live) and Directed Spanning Join (DSJ) protocols. Using a topology model of the current Internet, it is found that the DRPJ protocol reduces probe messages by nearly 90% and 75% compared to the Flooding with TTL and DSJ protocols, respectively.
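The directed aspect can be illustrated with a toy forwarding rule (a heuristic of my own, not the actual DRPJ probe logic): instead of flooding a join probe to all neighbors, forward it only toward neighbors that move strictly closer to the multicast tree, which is the kind of pruning that cuts probe-message overhead relative to flooding:

```python
def forward_probe(node, neighbors, dist_to_tree, visited):
    """Loose illustration only; names and the heuristic are assumptions.
    Forward a join probe only to unvisited neighbors that are strictly
    closer to the multicast tree than the current node is."""
    return [n for n in neighbors
            if n not in visited and dist_to_tree[n] < dist_to_tree[node]]
```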
Network and System Support for Games | 2006
Hiroshi Fujinoki
This paper presents our ongoing research activity to design and implement a framework for a networked virtual environment (NVE) that efficiently supports both hardware and software heterogeneity. The proposed framework integrates three new techniques: application-layer multicast transmission-rate pruning, fairness control for delay-sensitive activities (using a token-bucket algorithm), and bandwidth compensation through a combination of server-side and client-side dead reckoning. Together these techniques support heterogeneous networks and end systems.
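Since the abstract names the token-bucket algorithm for fairness control, a minimal, standard token-bucket sketch follows (the rate and capacity values are illustrative only; the paper does not give parameters here):

```python
import time

class TokenBucket:
    """Standard token-bucket rate limiter: tokens accumulate at a fixed
    rate up to a capacity (the allowed burst size), and an action is
    admitted only if enough tokens are available to pay for it."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost       # admit the action
            return True
        return False                  # defer or drop the action

# e.g. allow each client 20 position updates/s with bursts of 5:
# bucket = TokenBucket(rate=20, capacity=5)
```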
Local Computer Networks | 2003
Hiroshi Fujinoki; Murugesan Sanjay; Chintan Shah
Recently, a major concern for corporate owners of Web servers is how to minimize response time. To this end, a new efficient file transmission technique for the World Wide Web, called Web file transmission by object packaging, is proposed. The prototype design and a preliminary performance evaluation of object packaging were presented at the 27th Conference on Local Computer Networks. In this paper, the performance of object packaging is compared experimentally to HTTP 1.0 and HTTP 1.1 persistent connections in terms of server and client response time and CPU workload. It is found that object packaging reduces response time and server CPU workload by up to 92.0% and 91.1% relative to HTTP 1.0, and by 64.5% and 82.4% relative to HTTP 1.1 persistent connections, respectively. These results indicate that object packaging is an efficient technique for minimizing response time when transferring Web files. The experiments also highlight the significance of disk I/O overhead for Web servers on high-speed networks.
IEEE International Conference on Cloud Computing Technology and Science | 2016
Alexander A. Towell; Hiroshi Fujinoki
This paper applies a resilience-engineering approach to studying how effective encrypted searches will be. One concern with encrypted searches is frequency attacks, in which adversaries guess the meaning of encrypted words by observing a large number of encrypted words in search queries and mapping them to guessed plaintext words using a known histogram. It is therefore important for defenders to know how many encrypted words adversaries need to observe before they correctly guess the encrypted words. However, determining this takes defenders a long time because of the large volume of encrypted words involved. We developed and evaluated the Moving Average Bootstrap (MAB) method for estimating the number of encrypted words (N*) an adversary needs to observe before correctly guessing a certain percentage of the observed words with a certain confidence. Our experiments indicate that the MAB method lets defenders estimate N* in only 5% of the time required without MAB. Because of this significant reduction in the time required to estimate N*, MAB will contribute to the safety of encrypted searches.
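The paper's MAB method is more specific than what fits here, but the bootstrap flavor can be sketched (the sample values and function name below are invented for illustration): resample a handful of observed N* values from short attack runs to obtain a confidence interval without rerunning the full, expensive simulation:

```python
import random
import statistics

def bootstrap_ci(samples, trials=2000, alpha=0.05):
    """Generic bootstrap confidence interval, only a sketch of the
    flavor of MAB, not the paper's method: resample the observed N*
    values with replacement many times and take the empirical
    percentiles of the resampled means as the interval bounds."""
    means = sorted(
        statistics.mean(random.choices(samples, k=len(samples)))
        for _ in range(trials)
    )
    lo = means[int(alpha / 2 * trials)]
    hi = means[int((1 - alpha / 2) * trials) - 1]
    return lo, hi

# e.g. five short attack runs gave these N* observations (made-up numbers):
# bootstrap_ci([12_400, 13_100, 11_800, 12_900, 12_600])
```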