Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mitsuhisa Sato is active.

Publication


Featured research published by Mitsuhisa Sato.


Future Generation Computer Systems | 1999

Design and implementations of Ninf: towards a global computing infrastructure

Hidemoto Nakada; Mitsuhisa Sato; Satoshi Sekiguchi

The world-wide computing infrastructure built on growing computer network technology is a key technology for making a variety of information services accessible through the Internet to every user, from high-performance computing users to personal computing users. The important feature of such services is location transparency: information can be obtained irrespective of time or location, in a virtually shared manner. In this article, we give an overview of Ninf, an ongoing global network-wide computing infrastructure project that allows users to access computational resources, including hardware, software, and scientific data, distributed across a wide-area network. Preliminary performance results on measuring software and network overhead are shown, which promise the future reality of world-wide network computing.


IEEE International Conference on High Performance Computing Data and Analytics | 1997

PM: An Operating System Coordinated High Performance Communication Library

Hiroshi Tezuka; Atsushi Hori; Yutaka Ishikawa; Mitsuhisa Sato

We have developed a new communication library, called PM, for the Myrinet gigabit LAN card, which has a dedicated processor and on-board memory for handling communication protocols. To obtain high-performance communication and support multi-user environments, we co-designed PM, an operating system implemented as a daemon process, and the run-time routine for a programming language. Several unique features, e.g., network context switching and a modified ACK/NACK flow-control algorithm, have been developed for PM.


International Parallel and Distributed Processing Symposium | 2006

Profile-based optimization of power performance by using dynamic voltage scaling on a PC cluster

Yoshihiko Hotta; Mitsuhisa Sato; Hideaki Kimura; Satoshi Matsuoka; Taisuke Boku; Daisuke Takahashi

Currently, several of the high-performance processors used in PC clusters have a DVS (dynamic voltage scaling) architecture that can dynamically scale processor voltage and frequency. Adaptive scheduling of the voltage and frequency enables us to reduce power dissipation without a performance slowdown during communication and memory access. In this paper, we propose a method of profile-based power-performance optimization by DVS scheduling in a high-performance PC cluster. We divide the program execution into several regions and select the best gear for power efficiency in each. Selecting the best gear is not straightforward, since DVS transitions incur overhead. We propose an optimization algorithm that selects a gear using the execution and power profiles while taking the transition overhead into account. We have designed and built a power-profiling system, PowerWatch, and used it to examine the effectiveness of our optimization algorithm on two types of power-scalable clusters (Crusoe and Turion). According to the results of benchmark tests, we achieved an almost 40% reduction in EDP (energy-delay product) with negligible performance impact (less than 5%) compared to results at the standard clock frequency.


Grid Computing | 2010

D-Cloud: Design of a Software Testing Environment for Reliable Distributed Systems Using Cloud Computing Technology

Takayuki Banzai; Hitoshi Koizumi; Ryo Kanbayashi; Takayuki Imada; Toshihiro Hanawa; Mitsuhisa Sato

In this paper, we propose D-Cloud, a software testing environment that uses cloud computing technology and virtual machines with a fault-injection facility. The importance of high dependability in software systems has recently increased, yet exhaustive testing of software systems is becoming expensive and time-consuming, and in many cases sufficient software testing is not possible. In particular, it is often difficult to test parallel and distributed systems in the real world after deployment, although reliable systems, such as high-availability servers, are parallel and distributed systems. D-Cloud is a cloud system that manages virtual machines with a fault-injection facility. It sets up a test environment on cloud resources using a given system configuration file and executes a series of tests automatically according to a given scenario, in which D-Cloud enables fault-tolerance testing by injecting device faults in the virtual machines. We have designed the D-Cloud system using Eucalyptus software, along with a description language, written in XML, for the system configuration and the fault-injection scenario. D-Cloud allows a user to easily set up and test a distributed system on the cloud and effectively reduces the cost and time of testing.


International Conference on Cluster Computing | 2006

Empirical Study on Reducing Energy of Parallel Programs using Slack Reclamation by DVFS in a Power-scalable High Performance Cluster

Hideaki Kimura; Mitsuhisa Sato; Yoshihiko Hotta; Taisuke Boku; Daisuke Takahashi

It has become important to improve the energy efficiency of high-performance PC clusters. In PC clusters, high-performance microprocessors have a dynamic voltage and frequency scaling (DVFS) mechanism, which allows the voltage and frequency to be set so as to reduce energy consumption. In this paper, we propose a new algorithm that reduces the energy consumption of a parallel program executed on a power-scalable cluster using DVFS. Whenever the computational load is not balanced, parallel programs encounter slack time; that is, they must wait for synchronization of the tasks. Our algorithm reclaims slack time by changing the voltage and frequency, which allows a reduction in energy consumption without impacting the performance of the program. The algorithm can be applied to parallel programs represented by a directed acyclic task graph (DAG). It selects an appropriate set of voltages and frequencies (called the gear) that lets each task execute at the lowest frequency that does not increase the overall execution time, while at the same time keeping the tasks' frequencies as uniform as possible. We built two different types of power-scalable clusters using the AMD Turion and Transmeta Crusoe. For the empirical study of energy reduction in PC clusters, we designed a toolkit called PowerWatch that includes power-monitoring tools and a DVFS control library; this toolkit precisely measures the power consumption of the entire cluster in real time. The experimental results using benchmark problems show that our algorithm reduces energy consumption by 25% with only a 1% loss in performance.


IEEE International Conference on High Performance Computing Data and Analytics | 1997

Ninf: A Network Based Information Library for Global World-Wide Computing Infrastructure

Mitsuhisa Sato; Hidemoto Nakada; Satoshi Sekiguchi; Satoshi Matsuoka; Umpei Nagashima; Hiromitsu Takagi

Ninf is an ongoing global network-wide computing infrastructure project which allows users to access computational resources, including hardware, software, and scientific data, distributed across a wide-area network. Ninf is intended not only to exploit high performance in network parallel computing, but also to provide high-quality numerical computation services and access to scientific databases published by other researchers. Computational resources are shared as Ninf remote libraries executable on a remote Ninf server. Users can build an application by calling the libraries with the Ninf Remote Procedure Call, which is designed to provide a programming interface similar to conventional function calls in existing languages and is tailored for scientific computation. In order to facilitate location transparency and network-wide parallelism, the Ninf metaserver maintains global resource information regarding computational servers and databases, allocating and scheduling coarse-grained computations for global load balancing. Ninf also interfaces with WWW browsers for easy accessibility.


International Symposium on Computer Architecture | 1992

Thread-based programming for the EM-4 hybrid dataflow machine

Mitsuhisa Sato; Yuetsu Kodama; Shuichi Sakai; Yoshinori Yamaguchi; Yasuhito Koumura

In this paper, we present a thread-based programming model for the EM-4 hybrid dataflow machine, in which parallelism and synchronization among threads of sequential execution are described explicitly by the programmer. Although EM-4 was originally designed as a dataflow machine, we demonstrate that it provides effective architectural support for a variety of programming styles, including message passing and distributed data sharing in imperative languages. Our approach allows the programmer to control parallelism and maintain data locality explicitly to achieve high performance. EM-4 can thus be thought of as a multi-threaded architecture that can exploit both von Neumann and dataflow compiling technology. Thread-based programming provides the first step toward better programming and compiling technology for hybrid dataflow machines such as EM-4.


International Symposium on Systems Synthesis | 2002

OpenMP: parallel programming API for shared memory multiprocessors and on-chip multiprocessors

Mitsuhisa Sato

The OpenMP application programming interface is an emerging standard for parallel programming on shared-memory multiprocessors. Recently, OpenMP has been attracting widespread interest because of its easy-to-use, portable parallel programming model. In this paper, we give a brief introduction to the OpenMP API and its parallel programming model. We present our Omni OpenMP compiler and the performance of some applications on a shared-memory multiprocessor. Finally, the role of OpenMP for modern on-chip multiprocessors is discussed.


IEEE International Conference on High Performance Computing Data and Analytics | 2000

Performance Evaluation of the Omni OpenMP Compiler

Kazuhiro Kusano; Shigehisa Satoh; Mitsuhisa Sato

We developed an OpenMP compiler called Omni. This paper describes a performance evaluation of the Omni OpenMP compiler. We take two commercial OpenMP C compilers, KAI GuideC and the PGI C compiler, for comparison. Microbenchmarks and a program from Parkbench are used for the evaluation. The results on a Sun Enterprise 450 with four processors show that the performance of Omni is comparable to that of the commercial KAI GuideC compiler. According to the results, parallelization using OpenMP directives is effective and scales well if the loop contains enough operations.


International Conference on Software Testing, Verification and Validation Workshops | 2010

Large-Scale Software Testing Environment Using Cloud Computing Technology for Dependable Parallel and Distributed Systems

Toshihiro Hanawa; Takayuki Banzai; Hitoshi Koizumi; Ryo Kanbayashi; Takayuki Imada; Mitsuhisa Sato

Various information systems are widely used in the information society, and the demand for highly dependable systems increases year after year. However, software testing of such systems becomes more difficult due to their growing scale and complexity. In particular, it is very difficult to test parallel and distributed systems sufficiently, although dependable systems such as high-availability servers are usually parallel and distributed systems. To solve these problems, we propose D-Cloud, a software testing environment for dependable parallel and distributed systems using cloud computing technology. D-Cloud comprises Eucalyptus as the cloud management software, FaultVM, based on QEMU, as the virtualization software, and the D-Cloud frontend for interpreting the test scenario. D-Cloud not only automates the system configuration and the test procedure, but also performs a number of test cases simultaneously and emulates hardware faults flexibly. In this paper, we present the concept and design of D-Cloud and describe how to specify the system configuration and the test scenario. Furthermore, we present a preliminary example of software testing using D-Cloud. The result shows that D-Cloud allows the user to set up the environment easily and to test software for distributed systems.

Collaboration


Dive into Mitsuhisa Sato's collaboration.

Top Co-Authors

Taisuke Boku
Toyohashi University of Technology

Yuetsu Kodama
National Institute of Advanced Industrial Science and Technology

Yoshinori Yamaguchi
National Institute of Advanced Industrial Science and Technology

Satoshi Sekiguchi
National Institute of Advanced Industrial Science and Technology