
Publication


Featured research published by Michitaka Okuno.


IEICE Transactions on Communications | 2006

100-Gb/s Physical-Layer Architecture for Next-Generation Ethernet

Hidehiro Toyoda; Shinji Nishimura; Michitaka Okuno; Kouji Fukuda; K. Nakahara; Hiroaki Nishi

A high-speed physical-layer architecture for Ethernet is described that supports 100-Gb/s throughput and 40-km transmission, making it well suited for next-generation metro-area and intra-building networks. Its links comprise 12 x 10-Gb/s synchronized parallel optical lanes; Ethernet data frames are transmitted over a coarse-wavelength-division-multiplexing link and bundled optical fibers. Ten of the lanes convey 640-bit data synchronously (64 bits x 10 lanes). One conveys a forward error correction code ((132b, 140b) Hamming code), providing highly reliable (BER < 10^-12) data transmission, and the other conveys parity data, enabling fault-lane recovery. A newly developed 64B/66B code-sequence-based deskewing mechanism provides low-latency compensation for the lane-to-lane skew, which is less than 88 ns. Testing of this physical-layer architecture in a field-programmable gate array demonstrated that it can provide 100-Gb/s data communication with a 590-kgate circuit, which is small enough for implementation in a single LSI.
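The fault-lane recovery described above relies on the dedicated parity lane: XOR-ing the ten data lanes byte-wise yields a parity word from which any single failed lane can be rebuilt. Below is a minimal Python sketch of that idea; the lane count and 64-bit-per-lane block width follow the abstract, while the byte-level framing is simplified.

```python
import secrets

NUM_DATA_LANES = 10          # lanes carrying 64-bit words of the 640-bit block
WORD_BYTES = 8               # 64 bits per lane per block, per the abstract

def make_parity(lanes: list[bytes]) -> bytes:
    """XOR all data lanes byte-wise to form the parity lane."""
    parity = bytearray(WORD_BYTES)
    for lane in lanes:
        for i, b in enumerate(lane):
            parity[i] ^= b
    return bytes(parity)

def recover_lane(lanes: list[bytes | None], parity: bytes) -> bytes:
    """Rebuild the single failed lane (marked None) from the survivors."""
    missing = [i for i, lane in enumerate(lanes) if lane is None]
    assert len(missing) == 1, "parity protects against exactly one lane fault"
    rebuilt = bytearray(parity)
    for lane in lanes:
        if lane is not None:
            for i, b in enumerate(lane):
                rebuilt[i] ^= b
    return bytes(rebuilt)

# Transmit one 640-bit block: 10 lanes x 64 bits, plus the parity word.
lanes = [secrets.token_bytes(WORD_BYTES) for _ in range(NUM_DATA_LANES)]
parity = make_parity(lanes)

# Simulate a lane fault and recover the lost data from the other lanes.
broken = list(lanes)
broken[3] = None
assert recover_lane(broken, parity) == lanes[3]
```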


International Conference on Communications | 2005

A 100-Gb-Ethernet subsystem for next-generation metro-area network

Hidehiro Toyoda; Shinji Nishimura; Michitaka Okuno; Ryouji Yamaoka; Hiroaki Nishi

An ultra-high-speed Ethernet subsystem that realizes 100-Gb/s throughput and transmission up to 40 km is examined for next-generation metro-area networks. A parallel link of twelve 10-Gb/s synchronized optical lanes is proposed. Ten of the lanes are used to transmit 10-bit-parallel data. One of the redundant lanes transmits a forward error correction code ((132b, 140b) Hamming code) to achieve highly reliable (BER < 10^-12) data transmission, and the other transmits parity data used for fault-lane recovery. A 64B/66B code-sequence-based deskewing mechanism is proposed, and its effectiveness in realizing low-latency compensation of the inter-lane skew (< 80 ns) is shown. We implemented the 100-Gb-Ethernet interface architecture in FPGA circuits and confirmed 100-Gb/s data communication with a compact 385-kgate circuit, which is small enough for implementation in a single LSI.
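The 64B/66B code-sequence-based deskewing rests on the property that every 66-bit block starts with a 2-bit sync header of 01 or 10, so a receiver can recover each lane's block boundary by searching the 66 possible bit offsets. The sketch below models that block-lock search in Python; the window length and bitstring representation are illustrative assumptions, and lane skews are modeled as less than one block for simplicity.

```python
import random

random.seed(7)
SYNC = ("01", "10")  # valid 64B/66B sync headers: data / control blocks

def make_lane(num_blocks: int, skew_bits: int) -> str:
    """A lane stream: `skew_bits` of junk, then 66-bit blocks with sync headers."""
    blocks = "".join(
        random.choice(SYNC) + "".join(random.choice("01") for _ in range(64))
        for _ in range(num_blocks)
    )
    junk = "".join(random.choice("01") for _ in range(skew_bits))
    return junk + blocks

def block_lock(stream: str, window: int = 32) -> int:
    """Find the bit offset at which `window` consecutive 66-bit blocks all
    start with a valid sync header, as a block-lock state machine would."""
    for offset in range(66):
        starts = range(offset, offset + window * 66, 66)
        if all(stream[s:s + 2] in SYNC for s in starts):
            return offset
    raise ValueError("no 66-bit alignment found")

# Each of the 12 lanes arrives with a different skew; block lock recovers
# the boundary, after which lanes can be delayed to a common block index.
skews = [random.randrange(0, 66) for _ in range(12)]
lanes = [make_lane(64, s) for s in skews]
assert [block_lock(lane) for lane in lanes] == skews
```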


IEICE Transactions on Electronics | 2006

Cache-based network processor architecture: Evaluation with real network traffic

Michitaka Okuno; Shinji Nishimura; Shin Ichi Ishida; Hiroaki Nishi

A novel cache-based network-processor (NP) architecture that can keep up with next-generation 100-Gbps packet-processing throughput by exploiting the nature of network traffic is proposed, and a prototype is evaluated with real network traffic traces. The architecture consists of several small processing units (PUs) and bit-stream manipulation hardware called a burst-stream path (BSP), which has a special cache mechanism called a process-learning cache (PLC) and a cache-miss handler (CMH). The PLC memorizes a packet-processing method together with all table-lookup results and applies it to subsequent packets that have the same information in their headers. To avoid blocking packet processing, the CMH handles cache-miss packets while registration processing is performed at the PLC. The combination of the PLC and CMH enables most packets to skip execution at the PUs, which dissipate most of the power in conventional NPs. We evaluated an FPGA-based prototype with real core-network traffic traces from a WIDE backbone router. In the experiments we observed a special case in which minimum-size packets appeared in large quantities, yet the cache-based NP achieved 100% throughput with PUs providing only 10% of the line throughput, owing to the very high temporal locality of network traffic. Overall, the results indicate that the cache-based NP can achieve 100-Gbps throughput using PUs with 10- to 40-Gbps throughput. The power consumption of a cache-based NP built from 40-Gbps-throughput PUs is estimated to be only 44.7% that of a conventional NP.
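The PLC/CMH division of labor amounts to memoizing the slow path: a hit applies the memorized result directly and skips the PUs, while a miss runs the full lookup and registers the result for the packets that follow. A small Python sketch of that flow, with a hypothetical route_lookup standing in for the PU pipeline:

```python
from collections import OrderedDict

class ProcessLearningCache:
    """Sketch of the PLC idea: memoize the full packet-processing result
    (e.g., table-lookup outcomes) keyed by the relevant header fields."""

    def __init__(self, capacity: int, slow_path):
        self.cache = OrderedDict()      # header key -> memorized actions
        self.capacity = capacity
        self.slow_path = slow_path      # stands in for the PU pipeline
        self.hits = self.misses = 0

    def process(self, header: tuple):
        if header in self.cache:        # fast path: skip the PUs entirely
            self.hits += 1
            self.cache.move_to_end(header)
            return self.cache[header]
        # Miss: the cache-miss handler sends the packet through the PUs
        # and registers the learned result for subsequent packets.
        self.misses += 1
        actions = self.slow_path(header)
        self.cache[header] = actions
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict least-recently-used entry
        return actions

def route_lookup(header):
    """Hypothetical stand-in for the PU pipeline: route + QoS decisions."""
    src, dst, proto = header
    return {"egress_port": hash(dst) % 16, "qos_class": 0}

plc = ProcessLearningCache(capacity=4096, slow_path=route_lookup)
flows = [("10.0.0.%d" % (i % 50), "192.168.1.1", 6) for i in range(100_000)]
for hdr in flows:
    plc.process(hdr)
print(f"hit rate: {plc.hits / (plc.hits + plc.misses):.1%}")  # ~100% here
```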


International Conference on Communications | 2008

A 100 Gb/s and Highly Reliable Physical-Layer Architecture for VSR and Backplane Ethernet

Hidehiro Toyoda; Michitaka Okuno; Shinji Nishimura; Matsuaki Terada

A high-throughput, highly reliable physical-layer architecture for very-short-reach (VSR) and backplane Ethernet applications was developed. VSR and backplane networks provide 100-Gb/s data transmission between blade servers and LAN switches. This architecture supports 100-Gb/s throughput and high-reliability, low-latency data transmission, making it well suited to VSR and backplane applications for inter-LAN-switch and intra-cabinet networks. Its links comprise ten 10-Gb/s high-speed serial lanes. Payload data are transmitted over a ribbon fiber or a copper cable for VSR applications and over copper channels for the backplane. The ten lanes convey Ethernet data frames together with the parity data of a newly developed (544, 512) forward error correction (FEC) code, providing highly reliable (BER < 1E-22) data transmission with low-latency burst-error correction (29.0 ns on the transmitter (Tx) side and 104.4 ns on the receiver (Rx) side). A 64B/66B code-sequence-based skew-compensation mechanism, which provides low-latency compensation for the lane-to-lane skew, is used for multi-lane serial transmission. Testing an ASIC with this physical-layer architecture showed that it can provide 100-Gb/s data transmission with a 747-kgate circuit, which is small enough to be implemented in a single LSI. Furthermore, this paper resolves the insufficient FEC rate of a previous report and proposes a technique for improving the reliability of the skew-compensation mechanism.
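The (544, 512) code's cost can be checked with simple rate arithmetic: 512 payload bits per 544-bit codeword gives a 6.25% overhead, so each of the ten lanes must run slightly above 10 Gb/s to deliver 100 Gb/s of payload. A back-of-envelope sketch, ignoring 64B/66B and framing overheads:

```python
# Rate arithmetic for the (544, 512) FEC described in the abstract:
# 512 payload bits are carried in each 544-bit codeword.
n, k = 544, 512
rate = k / n
overhead = (n - k) / k

print(f"code rate:      {rate:.4f}")        # ~0.9412
print(f"overhead:       {overhead:.2%}")    # 6.25% extra bits on the wire

# To deliver 100 Gb/s of payload over 10 lanes, each lane must carry
# 10 Gb/s of payload plus the FEC overhead (a simplified model).
payload_per_lane = 100e9 / 10
line_rate_per_lane = payload_per_lane / rate
print(f"lane line rate: {line_rate_per_lane / 1e9:.3f} Gb/s")  # ~10.625 Gb/s
```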


IEICE Transactions on Electronics | 2005

Low-Power Network-Packet-Processing Architecture Using Process-Learning Cache for High-End Backbone Router

Michitaka Okuno; Shin Ichi Ishida; Hiroaki Nishi

A novel cache-based packet-processing-engine (PPE) architecture that achieves low power consumption and high packet-processing throughput by exploiting the nature of network traffic is proposed. The architecture consists of a processing-unit array and a bit-stream manipulation path called a burst-stream path (BSP), which has a special cache mechanism called a process-learning cache (PLC). Network packets with the same header information appear repeatedly over a short time. By exploiting that behavior, the PLC memorizes the packet-processing method together with all of its results (i.e., table lookups) and applies it to subsequent packets. The PLC enables most packets to skip execution at the processing-unit array, which consumes considerable power. As a practical implementation of the cache-based PPE architecture, P-Gear was designed and compared with a conventional PPE in terms of silicon die size and power consumption. With current 0.13-μm CMOS process technology, P-Gear can achieve 100-Gbps (gigabits per second) packet-processing throughput with only 36.5% of the die size and 32.8% of the power consumption required by the conventional PPE. Configurations of both architectures for the 1- to 100-Gbps throughput range were also analyzed. At throughputs of 10 Gbps or more, P-Gear achieves the target throughput in a smaller die than the conventional PPE, and across the whole range it does so at lower power.
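The power saving follows from the processing-unit array being exercised only on cache misses. A toy model with illustrative numbers (not the paper's figures) shows the shape of the argument:

```python
def ppe_power(miss_rate: float, pu_power_w: float, cache_power_w: float) -> float:
    """Toy model: the PU array's dynamic power scales with the miss rate,
    while the cache itself always runs. All wattages are illustrative
    assumptions, not values from the paper."""
    return cache_power_w + miss_rate * pu_power_w

conventional = ppe_power(miss_rate=1.0, pu_power_w=40.0, cache_power_w=0.0)
cache_based = ppe_power(miss_rate=0.1, pu_power_w=40.0, cache_power_w=6.0)
print(f"cache-based / conventional: {cache_based / conventional:.1%}")
# -> 25.0% with these example numbers; the paper reports 32.8% for its
#    concrete 0.13-um design point.
```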


High Performance Switching and Routing | 2006

Evaluation of cache-based network processor by using real backbone network trace

Shinichi Ishida; Michitaka Okuno; Hiroaki Nishi

In this paper, a novel cache-based packet-processing-engine (PPE) architecture that achieves high packet-processing throughput with low power consumption is proposed and evaluated. Because network packets with the same header information appear repeatedly within a short time, a special cache, the so-called header-learning cache (HLC), memorizes the packet-processing method and enables most packets to skip execution at the processing-unit array. P-Gear, an implementation of the cache-based PPE architecture, was designed, and real backbone network traces were used to evaluate its performance. P-Gear achieves a cache hit rate of over 80% using 4K-entry and 32K-entry caches for access and core networks, respectively. Compared with a conventional PPE, P-Gear can achieve 100-Gbps (gigabits per second) packet-processing throughput with only 36.5% of the die size and 32.6% of the power consumption.
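The evaluation method is to replay a header trace through a fixed-size cache and count hits. Since the backbone trace itself is not available here, the sketch below uses a synthetic Zipf-like trace to show the procedure; the skewed flow popularity is the temporal locality the HLC exploits, and the 4K/32K entry counts follow the abstract.

```python
import random
from collections import OrderedDict

def hit_rate(trace, entries: int) -> float:
    """Replay a flow-key trace through an LRU cache and report the hit rate."""
    cache, hits = OrderedDict(), 0
    for key in trace:
        if key in cache:
            hits += 1
            cache.move_to_end(key)
        else:
            cache[key] = True
            if len(cache) > entries:
                cache.popitem(last=False)   # evict least-recently-used flow
    return hits / len(trace)

# Synthetic stand-in for the backbone trace: flow popularity is highly
# skewed, so a small cache captures most packets.
random.seed(0)
weights = [1 / (rank + 1) for rank in range(200_000)]
trace = random.choices(range(200_000), weights=weights, k=500_000)
for entries in (4096, 32768):
    print(f"{entries:>6}-entry cache: {hit_rate(trace, entries):.1%} hits")
```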


International Conference on Photonics in Switching | 2009

Implementation of optical switch control function in Active Optical Access System

Koji Wakayama; Michitaka Okuno; Daisuke Mashimo; Jun Sugawa; Hiroki Ikeda; Kenichi Sakamoto

We propose an optical-switch control procedure for an Active Optical Access System, in which optical switches are used instead of the optical splitters of a PON (Passive Optical Network). In the proposed procedure, an OLT (Optical Line Terminal) determines the switching schedules of the optical switches on an OSW (Optical Switching Unit) and informs the OSW of them with a switch-control frame. We demonstrate that the proposed procedure works effectively with logic circuits implemented on the OLT and OSW.
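The control flow is: the OLT computes a switching schedule and carries it to the OSW in a switch-control frame, and the OSW applies each entry at its scheduled time. The sketch below is a hypothetical rendering of that exchange; the frame fields and the microsecond time base are assumptions, not the paper's frame format.

```python
from dataclasses import dataclass

@dataclass
class SwitchControlFrame:
    """Hypothetical switch-control frame from the OLT: connect in_port to
    out_port at start_time_us. Field names are illustrative assumptions."""
    switch_id: int
    start_time_us: int
    in_port: int
    out_port: int

class OSW:
    """Optical switching unit: stores the schedule the OLT sent and applies
    each entry when its activation time arrives."""
    def __init__(self):
        self.crossbar = {}      # current connections: in_port -> out_port
        self.pending = []       # schedule entries, kept sorted by time

    def receive(self, frame: SwitchControlFrame):
        self.pending.append(frame)
        self.pending.sort(key=lambda f: f.start_time_us)

    def tick(self, now_us: int):
        while self.pending and self.pending[0].start_time_us <= now_us:
            f = self.pending.pop(0)
            self.crossbar[f.in_port] = f.out_port

# The OLT schedules two switchovers; by t = 130 us both have been applied.
osw = OSW()
osw.receive(SwitchControlFrame(switch_id=0, start_time_us=0, in_port=0, out_port=1))
osw.receive(SwitchControlFrame(switch_id=0, start_time_us=125, in_port=0, out_port=2))
osw.tick(now_us=130)
print(osw.crossbar)     # {0: 2}
```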


Archive | 2006

Traffic control method for network equipment

Michitaka Okuno


Archive | 2006

Packet transfer apparatus

Michitaka Okuno


Archive | 2005

Network-processor accelerator

Michitaka Okuno
