Publication
Featured research published by Kunihiro Taniguchi.
integrated network management | 2007
Shinji Nakadai; Kunihiro Taniguchi
Web sites occasionally experience sharp fluctuations in load. The quality of such services can be maintained by allocating servers according to the load, and this kind of autonomic service level management requires server capacity planning. However, existing capacity planning functions cannot appropriately calculate capacity in a heterogeneous server cluster, nor can they prioritize services. As a result, high-priority services may deteriorate while the quality of low-priority services remains high. Our approach achieves appropriate capacity planning for a heterogeneous server cluster by solving an integer programming problem derived from a given status of the system model; the status is specified by taking into account the weighted round-robin algorithm of the managed load balancer. Priority-based allocation is realized by fuzzy control.
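A minimal sketch of priority-aware capacity planning for a heterogeneous cluster, assuming hypothetical per-server capacities and per-service loads; the paper's actual integer-programming model, load-balancer weights, and fuzzy controller are not reproduced here.

```python
# Brute-force sketch: assign heterogeneous servers to services so that the
# priority-weighted unmet load is minimized. Illustrative only.
from itertools import product

def plan_capacity(server_caps, services):
    """services maps name -> (load, priority weight); each server serves one service."""
    names = list(services)
    best, best_cost = None, float("inf")
    for assign in product(range(len(names)), repeat=len(server_caps)):
        supplied = [0.0] * len(names)
        for server, svc in enumerate(assign):
            supplied[svc] += server_caps[server]
        cost = sum(services[n][1] * max(0.0, services[n][0] - supplied[i])
                   for i, n in enumerate(names))
        if cost < best_cost:
            best, best_cost = assign, cost
    allocation = {n: [] for n in names}
    for server, svc in enumerate(best):
        allocation[names[svc]].append(server)
    return allocation, best_cost

if __name__ == "__main__":
    caps = [100, 60, 40]                           # heterogeneous capacities (req/s)
    svcs = {"gold": (120, 10), "bronze": (90, 1)}  # load, priority weight
    print(plan_capacity(caps, svcs))
```

In this toy run the high-priority "gold" service receives enough capacity to cover its load before any capacity is spared for "bronze", which is the prioritization behavior the abstract describes.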
International Workshop on Network and Operating Systems Support for Digital Audio and Video | 1995
Hiroshi Kitamura; Kunihiro Taniguchi; Hiromitsu Sakamoto; Takeshi Nishida
A new OS architecture for high-performance communication, referred to as the Zero-copy architecture, is proposed. This architecture eliminates the memory-copy bottleneck that is a major overhead in protocol processing, reducing CPU processing overhead in addition to realizing high-speed data communication. The architecture is shown to be well suited to large-volume data communication such as video and image transfer.
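A minimal sketch contrasting copy-based and zero-copy file transfer over a socket, assuming a POSIX platform where os.sendfile() is available; it only illustrates the general idea behind avoiding user-space copies, not the paper's OS-level architecture.

```python
import os
import socket

CHUNK = 64 * 1024

def send_with_copies(sock: socket.socket, path: str) -> None:
    # Data is copied from the page cache into a user buffer, then back into
    # kernel socket buffers: two avoidable copies per chunk.
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            sock.sendall(chunk)

def send_zero_copy(sock: socket.socket, path: str) -> None:
    # sendfile() moves the data inside the kernel, skipping the user-space copy.
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(sock.fileno(), f.fileno(), offset, CHUNK)
            if sent == 0:
                break
            offset += sent
```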
advanced information networking and applications | 2013
Toru Osuga; Takayoshi Asakura; Kunihiro Taniguchi
The emergence of high-quality video streaming services over the Internet has caused significant increases in traffic volume. Cache technologies, which place frequently requested content close to users to mitigate traffic, have been widely deployed on the Internet to improve user experience. However, even when a cached object becomes a candidate for eviction because its access frequency has dropped, the object cannot be evicted while it is locked in working status until delivery completes. This degrades the cache hit ratio, and the problem has recently become much more common in high-traffic environments. This paper proposes a new cache replacement method, called scheduled eviction of lingering caches (SELC). SELC selects an eviction candidate object based on replacement latency and shuts out new accesses by redirecting them to an origin server, so that the object is unlocked once delivery for existing requests completes. Consequently, SELC can preserve cache space for emerging content. Simulation results indicate that the proposed method improves the cache hit ratio of crowded cache servers under rapidly changing access frequencies.
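A minimal sketch of the eviction idea behind SELC, assuming a simplified LRU cache in which objects being delivered are reference-counted ("locked"); the scheduled candidate is closed to new requests (clients would be redirected to the origin) and evicted as soon as its last delivery ends. The paper's replacement-latency scoring is not reproduced here.

```python
from collections import OrderedDict

class SelcCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.lru = OrderedDict()   # cached keys in least-recently-used order
        self.in_flight = {}        # key -> number of active deliveries
        self.draining = set()      # scheduled for eviction, closed to new hits

    def request(self, key):
        """Return 'HIT', 'MISS' (cache fetches the object), or 'REDIRECT'."""
        if key in self.draining:
            return "REDIRECT"                      # shut new access out
        if key in self.lru:
            self.lru.move_to_end(key)
            self.in_flight[key] = self.in_flight.get(key, 0) + 1
            return "HIT"
        self._make_room()
        self.lru[key] = True
        self.in_flight[key] = 1
        return "MISS"

    def finish(self, key):
        """Call when one delivery of `key` completes."""
        self.in_flight[key] -= 1
        if key in self.draining and self.in_flight[key] == 0:
            self.lru.pop(key, None)                # drained: evict now
            self.draining.discard(key)
            self.in_flight.pop(key, None)

    def _make_room(self):
        while len(self.lru) >= self.capacity:
            victim = next(iter(self.lru))          # least recently used
            if self.in_flight.get(victim, 0) == 0:
                self.lru.pop(victim)               # unlocked: evict immediately
                self.in_flight.pop(victim, None)
            else:
                self.draining.add(victim)          # evict once it drains;
                break                              # capacity briefly exceeded
```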
pacific rim conference on communications, computers and signal processing | 2007
Eiji Takahashi; Toru Osuga; Kunihiro Taniguchi; Naoki Wakamiya
Point-to-multipoint bulk data transfer using application-layer multicast (ALM) is discussed. In ALM, each host participating in an application session copies received data at the application layer and forwards the copies to other hosts over unicast connections. The performance of the hosts, especially their access link speed, therefore strongly affects overall delivery performance, and in heterogeneous access environments hosts with low-speed links are likely to degrade it. Many previous solutions, however, constructed ALM trees with arbitrary degree allocations and did not adequately consider the diversity of host access link speeds. This paper proposes an algorithm that constructs an efficient ALM tree from hosts with heterogeneous access lines; the algorithm minimizes delivery completion time and lets as many hosts as possible finish receiving all the data quickly. The effectiveness of the algorithm is verified through simulations.
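A greedy sketch of building an ALM tree over hosts with heterogeneous access links. This is not the paper's algorithm: hosts are attached fastest-uplink-first to whichever already-placed host gives the earliest estimated completion time, with each host's fan-out bounded by a simple uplink-derived degree; all rates, sizes, and the slot_mbps parameter are illustrative assumptions.

```python
def build_alm_tree(source, uplink_mbps, data_mb, slot_mbps=2.0):
    """uplink_mbps: host -> uplink speed (Mb/s), including the source.
    Returns (parent map, rough per-host completion-time estimate in seconds)."""
    degree = {h: max(1, int(r // slot_mbps)) for h, r in uplink_mbps.items()}
    children = {h: 0 for h in uplink_mbps}
    parent, finish = {source: None}, {source: 0.0}
    bits = data_mb * 8_000_000

    # Fast hosts join early so their uplink can serve many descendants.
    pending = sorted((h for h in uplink_mbps if h != source),
                     key=lambda h: uplink_mbps[h], reverse=True)
    for host in pending:
        candidates = [p for p in finish if children[p] < degree[p]]
        # Estimate: parent's uplink is shared among its children.
        best = min(candidates,
                   key=lambda p: finish[p] + bits * (children[p] + 1)
                                 / (uplink_mbps[p] * 1e6))
        children[best] += 1
        parent[host] = best
        finish[host] = finish[best] + bits * children[best] / (uplink_mbps[best] * 1e6)
    return parent, finish

if __name__ == "__main__":
    links = {"src": 100, "a": 50, "b": 10, "c": 2, "d": 2}
    print(build_alm_tree("src", links, data_mb=80))
```

In the example, the two 2 Mb/s hosts are attached as leaves under faster hosts, so their slow uplinks never sit on the path of other receivers, which is the effect the abstract is after.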
Archive | 1993
Kunihiro Taniguchi; Hiroshi Suzuki; Takeshi Nishida
High-performance internetworking processors are indispensable for realizing multimedia communications on gigabit internets. This paper compares various alternative architectures in terms of performance and functionality, and proposes an optimal architecture for high-performance internetworking processors, referred to as INPs (Internetwork Nodal Processors), which forward packets at the network layer. An INP is basically composed of two main components: 1) several network I/F modules that perform network-layer functions, and 2) a data forwarding module that interconnects the network I/F modules and provides message paths between any pair of them. First, three types of INP architecture are compared with respect to how the tables used for route decisions are allocated. Next, after showing the performance bottleneck of a bus-based INP architecture, INPs that use a switch as the data forwarding module are analyzed, and three alternative packet buffering schemes for the switch architecture are compared. Based on these comparisons, the paper proposes that a combination of table allocation fully distributed across the network I/F modules and an input-output buffered switch as the data forwarding module is best suited for high-performance INPs.
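A minimal sketch of the "fully distributed table allocation" idea: each network I/F module keeps its own copy of the forwarding table and resolves the output port locally, so route lookup never crosses a shared bus. The prefixes, port names, and the crossbar stand-in below are illustrative assumptions, not the INP design itself.

```python
import ipaddress

class InterfaceModule:
    """One network I/F module with a local longest-prefix-match table."""
    def __init__(self, name, routes):
        # routes: list of (CIDR prefix, output interface name)
        self.name = name
        self.table = [(ipaddress.ip_network(p), out) for p, out in routes]

    def lookup(self, dst):
        addr = ipaddress.ip_address(dst)
        matches = [(net.prefixlen, out) for net, out in self.table if addr in net]
        if not matches:
            raise LookupError(f"no route to {dst}")
        return max(matches)[1]           # longest prefix wins

class SwitchFabric:
    """Stands in for the input-output buffered switch: moves a packet from the
    ingress module to the egress module chosen by the ingress's local lookup."""
    def __init__(self, modules):
        self.modules = {m.name: m for m in modules}

    def forward(self, ingress, dst):
        egress = self.modules[ingress].lookup(dst)   # decision stays local
        return f"{ingress} -> {egress} for {dst}"

if __name__ == "__main__":
    routes = [("10.0.0.0/8", "if1"), ("10.1.0.0/16", "if2"), ("0.0.0.0/0", "if0")]
    fabric = SwitchFabric([InterfaceModule(n, routes) for n in ("if0", "if1", "if2")])
    print(fabric.forward("if0", "10.1.2.3"))   # longest match selects if2
```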
Archive | 2005
Wen-Syan Li; Kunihiro Taniguchi; Atsuhiro Tanaka
IEICE Transactions on Communications | 1995
Takeshi Nishida; Kunihiro Taniguchi
Archive | 2008
Kunihiro Taniguchi
Archive | 1998
Takeshi Nishida; Kunihiro Taniguchi