Publication


Featured research published by Kento Aida.


IEEE International Conference on High Performance Computing Data and Analytics | 2000

Performance Evaluation Model for Scheduling in Global Computing Systems

Kento Aida; Atsuko Takefusa; Hidemoto Nakada; Satoshi Matsuoka; Satoshi Sekiguchi; Umpei Nagashima

Rapid progress in network technology is enabling high performance global computing, in which computational and data resources in a wide-area network (WAN) are transparently employed to solve large-scale problems. Several high performance global computing systems, such as Ninf, NetSolve, RCS, Legion, and Globus, have already been proposed. Each of these systems aims to achieve high performance through an efficient scheduling scheme, whereby a scheduler selects a set of appropriate computing resources to solve the client's computational problem. This paper proposes a performance evaluation model for effective scheduling in global computing systems. The proposed model represents a global computing system as a queuing network, in which servers and networks are represented by queuing systems. Verification of the proposed model and evaluation of scheduling schemes on it showed that the model can effectively simulate the behavior of an actual global computing system and of scheduling on that system.
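The queuing-network idea can be sketched in a few lines of Python: treat each server and each network link as an M/M/1 queue and let a scheduler pick the server with the lowest predicted total response time. This is a minimal illustration, not the paper's actual model; the server names and rates below are invented.

```python
def mm1_response(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; requires utilization < 1."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable (utilization >= 1)")
    return 1.0 / (service_rate - arrival_rate)

def pick_server(candidates):
    """candidates maps a server name to ((net_lam, net_mu), (srv_lam, srv_mu)):
    the arrival/service rates of its network link and of the server itself.
    Choose the server minimizing predicted network + server response time."""
    def predicted(name):
        net, srv = candidates[name]
        return mm1_response(*net) + mm1_response(*srv)
    return min(candidates, key=predicted)
```

In such a model a lightly loaded remote server can beat a heavily loaded nearby one, which is exactly the kind of trade-off a global computing scheduler must evaluate.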


Symposium on Applications and the Internet | 2010

Applying Double-Sided Combinational Auctions to Resource Allocation in Cloud Computing

Ikki Fujiwara; Kento Aida; Isao Ono

We believe that market-based resource allocation will be effective in a cloud computing environment, where resources are virtualized and delivered to users as services. We propose such a market mechanism to allocate services to participants efficiently. The mechanism enables users (1) to order a combination of services for workflows and co-allocations and (2) to reserve future/current services in a forward/spot market. The evaluation shows that the mechanism works well in plausible settings.
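The paper's mechanism handles combinations and forward reservations; as a rough intuition for the double-sided part alone, here is a textbook greedy double-auction matcher (an invented example, not the authors' algorithm): the highest buy bids are matched with the lowest sell asks while a trade is still profitable.

```python
def double_auction(bids, asks):
    """Match highest bids with lowest asks; price each trade at the midpoint.
    Simplified single-unit version; returns the list of trade prices."""
    bids = sorted(bids, reverse=True)   # buyers, best offer first
    asks = sorted(asks)                 # sellers, cheapest first
    trades = []
    for bid, ask in zip(bids, asks):
        if bid < ask:                   # no further profitable matches
            break
        trades.append((bid + ask) / 2)  # split-the-difference pricing
    return trades
```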


Cluster Computing and the Grid | 2003

Distributed computing with hierarchical master-worker paradigm for parallel branch and bound algorithm

Kento Aida; Wataru Natsume; Yoshiaki Futakata

This paper discusses the impact of the hierarchical master-worker paradigm on the performance of an application program that solves an optimization problem with a parallel branch and bound algorithm on a distributed computing system. The application addressed in this paper solves the BMI Eigenvalue Problem, an optimization problem that minimizes the greatest eigenvalue of a bilinear matrix function. This paper proposes a parallel branch and bound algorithm that solves the BMI Eigenvalue Problem with the hierarchical master-worker paradigm. The experimental results showed that the conventional algorithm with the master-worker paradigm significantly degraded performance on a Grid test bed, where computing resources were distributed over a WAN behind a firewall; the hierarchical master-worker paradigm, however, sustained good performance.
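The communication saving behind the hierarchical scheme can be illustrated with a small accounting sketch (hypothetical task counts, not figures from the paper): a flat master exchanges one WAN message per task, whereas a two-level scheme sends each cluster master one coarse batch over the WAN and keeps per-task traffic inside each cluster's LAN.

```python
def flat_messages(num_tasks):
    """Flat master-worker: every task handout crosses the WAN."""
    return num_tasks

def hierarchical_messages(num_tasks, num_clusters):
    """Two-level master-worker: one WAN exchange per coarse batch,
    per-task handouts stay on each cluster's LAN.
    Returns (wan_messages, lan_messages)."""
    batch = [num_tasks // num_clusters] * num_clusters
    for i in range(num_tasks % num_clusters):   # spread the remainder
        batch[i] += 1
    wan = len(batch)    # one batch per cluster master
    lan = sum(batch)    # every task is handed out locally
    return wan, lan
```

For 1000 fine-grained tasks on 4 clusters, WAN traffic drops from 1000 exchanges to 4, which is why the hierarchical paradigm tolerates firewalled, wide-area resources.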


Job Scheduling Strategies for Parallel Processing | 2000

Effect of Job Size Characteristics on Job Scheduling Performance

Kento Aida

The workload characteristics of a parallel computer depend on the administration policy and user community of the computer system. An administrator of a parallel computer system needs to select an appropriate scheduling algorithm that schedules multiple jobs on the system efficiently. The goal of the work presented in this paper is to investigate the mechanisms by which job size characteristics affect job scheduling performance. To this end, the paper evaluates the performance of job scheduling algorithms under various workload models, each with a particular characteristic related to the number of processors requested by a job, and analyzes which job size characteristics significantly affect job scheduling performance in the evaluation. The results showed that: (1) most scheduling algorithms classified as first-fit scheduling showed the best performance and were not affected by job size characteristics, and (2) certain job size characteristics significantly affected the performance of priority scheduling. The analysis showed that the LJF algorithm, which dispatches the largest job first, can pack jobs perfectly onto idle processors at high load when all jobs request power-of-two numbers of processors and the number of processors on the parallel computer is a power of two.
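The packing behavior of LJF can be checked with a short sketch (a simplification of the paper's setting, with invented job sizes): dispatch the largest queued job that still fits the idle processors. With power-of-two requests on a power-of-two machine the greedy pass fills the machine exactly; with arbitrary sizes it can strand processors.

```python
def ljf_dispatch(requests, processors):
    """Largest Job First: repeatedly start the largest queued job that
    still fits the idle processors. Returns (started, idle_left)."""
    started = []
    free = processors
    for job in sorted(requests, reverse=True):
        if job <= free:
            free -= job
            started.append(job)
    return started, free
```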


Job Scheduling Strategies for Parallel Processing | 1998

Job Scheduling Scheme for Pure Space Sharing Among Rigid Jobs

Kento Aida; Hironori Kasahara; Seinosuke Narita

This paper evaluates the performance of job scheduling schemes for pure space sharing among rigid jobs. Conventionally, job scheduling for pure space sharing among rigid jobs has used First Come First Served (FCFS). However, FCFS has the drawback that it cannot utilize processors efficiently. This paper evaluates, by simulation, performance analysis, and experiments on a real multiprocessor system, job scheduling schemes proposed to alleviate this drawback. The results showed that Fit Processors First Served (FPFS), which searches the job queue and aggressively dispatches jobs that fit the idle processors, is more effective and more practical than the others.
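The difference between the two disciplines fits in a few lines (a sketch with invented job sizes, not the paper's simulator): FCFS blocks behind the queue head, while FPFS scans past it for any job that fits the idle processors.

```python
from collections import deque

def fcfs_start(queue, free):
    """First Come First Served: only the queue head may start."""
    started = []
    while queue and queue[0] <= free:
        free -= queue[0]
        started.append(queue.popleft())
    return started, free

def fpfs_start(queue, free):
    """Fit Processors First Served: scan the whole queue and start
    every job that fits the currently idle processors."""
    started = []
    for job in list(queue):
        if job <= free:
            queue.remove(job)
            free -= job
            started.append(job)
    return started, free
```

With 6 idle processors and a queue of jobs requesting 8, 2, and 4 processors, FCFS starts nothing (the 8-processor head does not fit), while FPFS starts the 2- and 4-processor jobs and leaves no processor idle.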


Symposium on Applications and the Internet | 2005

A case study in running a parallel branch and bound application on the grid

Kento Aida; Tomotaka Osumi

This paper presents a case study in running a parallel branch and bound application effectively on the grid. The application discussed here is fine-grained and is parallelized with the hierarchical master-worker paradigm. This hierarchical algorithm performs master-worker computing at two levels: among PC clusters on the grid, and among computing nodes within each PC cluster. This hierarchical structure reduces communication overhead by localizing frequent communication within tightly coupled computing resources, that is, a single PC cluster. The algorithm is implemented on a grid testbed using the GridRPC middleware Ninf-G and Ninf. In the implementation, communication among PC clusters is performed securely via Ninf-G, which uses the grid security service of the Globus Toolkit, while communication among computing nodes in each PC cluster is performed via Ninf, which enables fast invocation of remote computing routines. The experimental results showed that implementing the application with the hierarchical master-worker paradigm, using a combination of Ninf-G and Ninf, effectively utilized the computing resources on the grid testbed to run this fine-grained application, in which the average computation time of a single task was less than one second.


High Performance Distributed Computing | 1998

A performance evaluation model for effective job scheduling in global computing systems

Kento Aida; Atsuko Takefusa; Hidemoto Nakada; Satoshi Matsuoka; Umpei Nagashima

The paper proposes a performance evaluation model for effective job scheduling in global computing systems. The proposed model represents a global computing system as a queueing network, in which servers and networks are represented by queueing systems. Evaluation of the proposed model showed that it can effectively simulate the behavior of an actual global computing system and job scheduling on that system.


International Conference on Computational Science and Its Applications | 2009

Parameter-Less GA Based Crop Parameter Assimilation with Satellite Image

Shamim Akhter; Keigo Sakamoto; Yann Chemin; Kento Aida

A Crop Assimilation Model (CAM) predicts the parameters of agro-hydrological models from satellite images. CAM with a double-layer GA, called CAM-DLGA, uses the Soil-Water-Atmosphere-Plant (SWAP) agro-hydrological model and a Genetic Algorithm (GA) to estimate the model parameters inversely. In CAM-DLGA, the GA parameters must initially be set in advance, which itself poses an evolutionary search problem. In this paper, we present a new methodology that uses a Parameter-Less GA (PLGA), so that the initial GA parameters are generated and assigned automatically. Numerous experiments were carried out to analyze the performance of the proposed model, and the effect of PLGA on the assimilation was traced on both synthetic and real satellite data. The experimental study showed that the PLGA approach yields comparatively better assimilation results.


Proceedings of International Symposium on Grids and Clouds (ISGC) 2017 — PoS(ISGC2017) | 2017

A Method for Remote Initial Vetting of Identity with PKI Credential

Eisaku Sakane; Takeshi Nishimura; Kento Aida

With the growth of large-scale distributed computing infrastructures, systems have been established that enable researchers -- not only international collaborative research projects but also small research groups -- to use the high performance computing resources in such infrastructures. For a resource-use system that invites researchers worldwide to submit research proposals, it is difficult to carry out initial vetting of identity based on a face-to-face meeting at a service window if the researcher whose proposal is accepted lives in a foreign country. The purpose of this paper is to propose a method that solves the difficulty of initial identity vetting for a remote user. An identity management (IdM) system vets the identity and reality of a user by checking previously registered personal information against identity documents. After the identity vetting, the user can obtain a credential used in the infrastructure. Suppose that IdM system (A) needs to initially vet the identity of a user and that the user already possesses a credential issued by another IdM system (B). The basic idea of this paper is that IdM system (A) uses the credential issued by IdM system (B) for the initial identity vetting, provided the level of assurance of IdM system (B) is the same as or higher than that of IdM system (A). However, IdM system (A) cannot always check the identity against the attribute information provided by the credential. In a trust federation, the IdM system can complete the identity vetting by querying the IdM system that issued the credential for the necessary and sufficient identity data. As the credentials handled in this paper, we focus on Public Key Infrastructure (PKI) credentials, which are often used in large-scale high performance computing environments.
We discuss the necessary conditions and procedure for ensuring that remote initial vetting of identity with a PKI credential provides the same assurance as vetting based on a face-to-face meeting. The proposed method can be introduced into an existing PKI without large changes, and its basic idea can also be applied to infrastructures based on other authentication technologies; this applicability is also considered.


Proceedings of International Symposium on Grids and Clouds (ISGC) 2016 — PoS(ISGC 2016) | 2017

A Study of Certification Authority Integration Model in a PKI Trust Federation on Distributed Infrastructures for Academic Research

Eisaku Sakane; Takeshi Nishimura; Kento Aida

Among the certification authorities (CAs) in an academic PKI trust federation such as IGTF (Interoperable Global Trust Federation), most academic organizations that operate a CA install the CA equipment in their own buildings. To keep the CA trustworthy, it is necessary to maintain specialized CA equipment and to employ specifically trained operators. The high cost thereby incurred weighs heavily on the CA organization. For research institutes whose primary duty is not CA operation, this cost burden is a serious problem, and cost reduction through more efficient operation is an important issue. Instead of focusing on further operational optimization of a single individual CA, in this paper we consider cost reduction by integrating more than one CA in a PKI federation. This paper considers the issuing and registration authorities that constitute a CA and proposes the following integration model: the issuing duties are integrated, while each organization carries out its registration duties as before. In the proposed model, integrating the issuing duties means that one issuing authority (IA) takes over the duty of the other IAs. Since each registration authority (RA) performs its registration duty as usual, most procedures, such as the application process for obtaining certificates, remain unchanged, so users are not confused. Based on this model, we discuss how to connect the superseding IA with the RAs.

Collaboration


Top co-authors of Kento Aida:

Shamim Akhter (Tokyo Institute of Technology)
Hidemoto Nakada (Tokyo Institute of Technology)
Eisaku Sakane (National Institute of Informatics)
Atsuko Takefusa (Tokyo Institute of Technology)
Keigo Sakamoto (Tokyo Institute of Technology)
Umpei Nagashima (National Institute of Advanced Industrial Science and Technology)
Kiyoshi Osawa (Tokyo Institute of Technology)
Motokazu Nishimura (Tokyo Institute of Technology)
Satoshi Matsuoka (Huazhong University of Science and Technology)