
Publications

Featured research published by Daisaku Yokoyama.


Future Generation Computer Systems | 2013

Design and implementation of GXP make - A workflow system based on make

Kenjiro Taura; Takuya Matsuzaki; Makoto Miwa; Yoshikazu Kamoshida; Daisaku Yokoyama; Nan Dun; Takeshi Shibata; Choi Sung Jun; Jun’ichi Tsujii

This paper describes the rationale behind designing workflow systems based on Unix make by showing a number of idioms useful for workflows comprising many tasks. It also demonstrates a specific design and implementation of such a workflow system called GXP make. GXP make supports all the features of GNU make and extends its platforms from single-node systems to clusters, clouds, supercomputers, and distributed systems. Interestingly, this is achieved by a very small code base that does not modify the GNU make implementation at all. While not ideal in terms of performance, it achieves useful performance and scalability, dispatching one million tasks in approximately 16,000 seconds (60 tasks per second, including dependence analysis) on an 8-core Intel Nehalem node. As real applications, recognition and classification of protein-protein interactions from biomedical texts on a supercomputer with more than 8,000 cores are described.
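The core execution model the abstract describes — run each task only after its dependencies finish, as make does — can be sketched as follows. This is an illustrative sketch, not GXP make's implementation; the `run_workflow` helper and the task names are made up.

```python
# Minimal sketch of make-style dependency-driven execution:
# a task runs only after all of its dependencies have completed.
def run_workflow(rules):
    """rules: {target: (list_of_dependencies, action_or_None)}.
    Returns the order in which targets were completed."""
    done, order = set(), []

    def build(target):
        if target in done:
            return
        deps, action = rules.get(target, ([], None))
        for dep in deps:          # build dependencies first, as make does
            build(dep)
        if action is not None:
            action()
        done.add(target)
        order.append(target)

    for target in rules:
        build(target)
    return order

# A three-task pipeline: parse depends on fetch, report depends on parse.
log = []
rules = {
    "fetch":  ([], lambda: log.append("fetch")),
    "parse":  (["fetch"], lambda: log.append("parse")),
    "report": (["parse"], lambda: log.append("report")),
}
run_workflow(rules)
```

Real make (and GXP make) additionally skips targets that are up to date and, per the abstract, dispatches independent targets in parallel across nodes; this sketch only shows the dependency ordering.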


international conference on e-science | 2010

Design and Implementation of GXP Make -- A Workflow System Based on Make

Kenjiro Taura; Takuya Matsuzaki; Makoto Miwa; Yoshikazu Kamoshida; Daisaku Yokoyama; Nan Dun; Takeshi Shibata; Choi Sung Jun; Jun’ichi Tsujii

This paper describes the rational behind designing workflow systems based on the Unix make by showing a number of idioms useful for workflows comprising many tasks. It also demonstrates a specific design and implementation of such a workflow system called GXP make. GXP make supports all the features of GNU make and extends its platforms from single node systems to clusters, clouds, supercomputers, and distributed systems. Interestingly, it is achieved by a very small code base that does not modify GNU make implementation at all. While being not ideal for performance, it achieved a useful performance and scalability of dispatching one million tasks in approximately 16,000 seconds (60 tasks per second, including dependence analysis) on an 8 core Intel Nehalem node. For real applications, recognition and classification of protein-protein interactions from biomedical texts on a supercomputer with more than 8,000 cores are described.


pacific-asia conference on knowledge discovery and data mining | 2014

A Framework for Large-Scale Train Trip Record Analysis and Its Application to Passengers' Flow Prediction after Train Accidents

Daisaku Yokoyama; Masahiko Itoh; Masashi Toyoda; Yoshimitsu Tomita; Satoshi Kawamura; Masaru Kitsuregawa

We have constructed a framework for analyzing passenger behavior in public transportation systems, since understanding this behavior is key to improving their efficiency. The framework uses a large-scale dataset of trip records created from smart card data to estimate passenger flows in a complex metro network. Its interactive flow visualization function enables various unusual phenomena to be observed. We also propose a predictive model of passenger behavior after a train accident. Evaluation showed that it can accurately predict passenger flows after a major train accident. The proposed framework is a first step towards real-time observation and prediction for public transportation systems.
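The first step such a framework needs — turning raw trip records into passenger flows — can be sketched as a simple origin-destination aggregation per time window. This is an illustrative sketch under assumed record fields (origin, destination, departure minute), not the paper's actual pipeline.

```python
from collections import Counter

# Illustrative sketch: aggregate smart-card trip records into
# origin-destination (OD) passenger flows per time window.
def od_flows(trips, window_minutes=60):
    """trips: list of (origin, destination, departure_minute).
    Returns a Counter keyed by (window_index, origin, destination)."""
    flows = Counter()
    for origin, dest, minute in trips:
        flows[(minute // window_minutes, origin, dest)] += 1
    return flows

# Two A->B trips in the first hour, one B->C trip in the second.
trips = [("A", "B", 10), ("A", "B", 50), ("B", "C", 70)]
flows = od_flows(trips)
```

A prediction model like the one described would then compare such flows against historical baselines for the same window to detect and forecast post-accident deviations.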


acm symposium on applied computing | 2013

Modeling I/O interference for data intensive distributed applications

Sven Groot; Kazuo Goda; Daisaku Yokoyama; Miyuki Nakano; Masaru Kitsuregawa

Data-intensive applications such as MapReduce can suffer large performance degradation from the effects of I/O interference when multiple processes access the same I/O resources simultaneously, particularly in the case of disks. It is necessary to understand this effect in order to improve resource allocation and utilization for these applications. In this paper, we propose a model for predicting the impact of I/O interference on MapReduce application performance. Our model takes basic parameters of the workload and hardware environment, together with knowledge of the I/O behavior of the application, to predict how I/O interference affects the scalability of an application. We compare the model's predictions for several workloads (TeraSort, WordCount, PFP Growth, and PageRank) against the actual behavior of those workloads in a real cluster environment, and confirm that our model can provide highly accurate predictions.
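To make the notion of I/O interference concrete, here is a deliberately simplified toy model (not the paper's equations): when n processes share one disk, seeks between competing streams add overhead, so aggregate bandwidth falls below peak and each process gets less than 1/n of it. The `interference` penalty factor is a made-up parameter for illustration.

```python
# Toy interference model (illustrative only, not the paper's model):
# aggregate disk bandwidth degrades with each extra competing process.
def predicted_runtime(data_bytes, n_procs, peak_bw, interference=0.15):
    """Estimate wall-clock seconds for n_procs processes each reading
    data_bytes from one shared disk. `interference` is a hypothetical
    per-extra-process penalty on aggregate bandwidth."""
    aggregate = peak_bw / (1.0 + interference * (n_procs - 1))
    per_proc = aggregate / n_procs
    return data_bytes / per_proc
```

With `interference=0` the model reduces to ideal fair sharing (runtime scales exactly with n); a positive penalty reproduces the super-linear slowdown that motivates modeling interference explicitly.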


ieee international conference on services computing | 2013

Variations in Performance Measurements of Multi-core Processors: A Study of n-Tier Applications

Junhee Park; Qingyang Wang; Deepal Jayasinghe; Jack Li; Yasuhiko Kanemasa; Masazumi Matsubara; Daisaku Yokoyama; Masaru Kitsuregawa; Calton Pu

The prevalence of multi-core processors has raised the question of whether applications can use the increasing number of cores efficiently in order to provide predictable quality of service (QoS). In this paper, we study the horizontal scalability of n-tier application performance within a multicore processor (MCP). Through extensive measurements of the RUBBoS benchmark, we found one major source of performance variations within MCP: the mapping of cores to virtual CPUs can significantly lower on-chip cache hit ratio, causing performance drops of up to 22% without obvious changes in resource utilization. After we eliminated these variations by fixing the MCP core mapping, we measured the impact of three mainstream hypervisors (the dominant Commercial Hypervisor, Xen, and KVM) on intra-MCP horizontal scalability. On a quad-core dual-processor (total 8 cores), we found some interesting similarities and dissimilarities among the hypervisors. An example of similarities is a non-monotonic scalability trend (throughput increasing up to 4 cores and then decreasing for more than 4 cores) when running a browse-only CPU-intensive workload. This problem can be traced to the management of last level cache of CPU packages. An example of dissimilarities among hypervisors is their handling of write operations in mixed read/write, I/O-intensive workloads. Specifically, the Commercial Hypervisor is able to provide more than twice the throughput compared to KVM. Our measurements show that both MCP cache architecture and the choice of hypervisors indeed have an impact on the efficiency and horizontal scalability achievable by applications. However, despite their differences, all three mainstream hypervisors have difficulties with the intra-MCP horizontal scalability beyond 4 cores for n-tier applications.


international conference on big data | 2015

Visual interface for exploring caution spots from vehicle recorder big data

Masahiko Itoh; Daisaku Yokoyama; Masashi Toyoda; Masaru Kitsuregawa

It is vital for the transportation industry, which performs most of its work by automobile, to reduce the number of traffic accidents. Many local governments in Japan have made maps of potentially risky traffic accident spots. However, making such maps over wide areas and with time information has been difficult because most are based on field investigations. Utilizing long-term driving records makes it possible to extract spatio-temporal caution spots over wide areas. This paper proposes a visual interaction method for exploring caution spots in large-scale vehicle recorder data. Our method provides (i) a flexible filtering interface for driving operations using various combinations of attribute values such as velocity and acceleration, and (ii) a 3D visual environment for spatio-temporal exploration of caution spots. We demonstrate the usefulness of this novel visual exploration environment using real data provided by one of the largest transportation companies in Japan. Exploration results show that our environment can extract caution spots where accidents have actually occurred or that lie on very narrow roads with poor visibility.


international conference on cloud computing | 2014

The Impact of Software Resource Allocation on Consolidated n-Tier Applications

Jack Li; Qingyang Wang; Chien An Lai; Junhee Park; Daisaku Yokoyama; Calton Pu

Consolidating several under-utilized user applications together to achieve higher utilization of hardware resources is important for cloud vendors to reduce cost and maximize profit. In this paper, we study the impact of tuning soft resources (e.g., server thread pool size or connection pool size) on n-tier web application performance in a consolidated cloud environment. By measuring CPU utilization and performance of two consolidated n-tier web application benchmark systems running RUBBoS, we found significant differences depending on the amount of soft resources allocated. When the two systems have different soft resource allocations and are fully utilized, the application with more soft resources may steal up to 8% of CPU from the co-resident application. Further analysis shows that the CPU stealing is due to more threads being scheduled for the system with the larger soft resource allocation. By limiting the number of runnable active threads for the consolidated VMs, we were able to mitigate the performance interference. More generally, our results show that careful soft resource allocation is a significant factor when deploying and tuning n-tier applications in clouds.


software engineering, artificial intelligence, networking and parallel/distributed computing | 2013

A System-Design Outline of the Distributed-Shogi-System Akara 2010

Kunihito Hoki; Tomoyuki Kaneko; Daisaku Yokoyama; Takuya Obata; Hiroshi Yamashita; Yoshimasa Tsuruoka; Takeshi Ito

This paper describes Akara 2010, the distributed shogi system that has defeated a professional shogi player in a public game for the first time in history. The system employs a novel design to build a high-performance computer shogi player for standard tournament conditions. The design enhances the performance of the entire system by means of distributed computing. To utilize a large number of computers, a majority-voting method using four existing programs is combined with a distributed-search method. Although the performance of the entire system could not be tested, the majority-voting component increased the winning percentage from 62% to 73%, and the distributed-search component increased it from 50% to 70% or more.
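The majority-voting idea the abstract describes — several independent engines each propose a move, and the system plays the most common proposal — can be sketched as below. The tie-breaking rule (defer to the first engine) is an assumption for illustration, not a documented detail of Akara 2010, and the shogi moves shown are arbitrary.

```python
from collections import Counter

# Sketch of majority voting over engine proposals (tie-break rule assumed).
def majority_vote(proposals):
    """proposals: list of moves, one per engine, in engine-priority order.
    Returns the most frequent move; ties go to the earliest-listed engine."""
    counts = Counter(proposals)
    best = max(counts.values())
    for move in proposals:               # first engine wins ties
        if counts[move] == best:
            return move
```

In the real system this vote was combined with a distributed search component, which the abstract reports independently raised the winning percentage.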


Journal of Information Processing | 2013

Two-level Task Scheduling for Parallel Game Tree Search Based on Necessity

Akira Ura; Daisaku Yokoyama; Takashi Chikayama

It is difficult to fully utilize the parallelism of large-scale computing environments in alpha-beta search. The naive parallel execution of subtrees would result in much less task pruning than may have been possible in sequential execution. This may even degrade total performance. To overcome this difficulty, we propose a two-level task scheduling policy in which all tasks are classified into two priority levels based on the necessity for their results. Low priority level tasks are only executed after all high priority level tasks currently executable have started. When new high priority level tasks are generated, the execution of low priority level tasks is suspended so that high level tasks can be executed. We suggest tasks be classified into the two levels based on the Young Brothers Wait Concept, which is widely used in parallel alpha-beta search. The experimental results revealed that the scheduling policy suppresses the degradation in performance caused by executing tasks whose results are eventually found to be unnecessary. We found the new policy improved performance when task granularity was sufficiently large.
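The start-order rule at the heart of the two-level policy — every currently known high-priority task starts before any low-priority task — can be sketched with a priority queue. This sketch shows only the ordering; it does not model suspending already-running low-level tasks, and all names are illustrative.

```python
import heapq

# Sketch of two-level priority ordering: HIGH tasks always start before
# any waiting LOW task; within a level, tasks keep submission order.
HIGH, LOW = 0, 1

def schedule(tasks):
    """tasks: list of (level, name). Returns names in start order."""
    queue = []
    for seq, (level, name) in enumerate(tasks):
        heapq.heappush(queue, (level, seq, name))   # seq preserves FIFO per level
    order = []
    while queue:
        _, _, name = heapq.heappop(queue)
        order.append(name)
    return order
```

In the paper's setting the level assignment itself comes from the Young Brothers Wait Concept: the eldest brother of a node is high priority, its younger siblings low priority until the eldest's result justifies searching them.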


discovery science | 2006

Automatic construction of static evaluation functions for computer game players

Makoto Miwa; Daisaku Yokoyama; Takashi Chikayama

Constructing evaluation functions with high accuracy is one of the critical factors in computer game players. This construction is usually done by hand, requiring deep knowledge of the game and much time for tuning. To avoid these difficulties, automatic construction of such functions is useful. In this paper, we propose a new method to generate features for evaluation functions automatically from game records. Evaluation features are built from simple features based on their frequency and mutual information. As an evaluation, we constructed evaluation functions for mate problems in shogi. The evaluation function automatically generated with several thousand evaluation features showed an accuracy of 74% in classifying positions into mate and non-mate.
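One of the two selection criteria the abstract names, mutual information between a candidate feature and the label (mate / non-mate), can be computed from game records as sketched below. The data and the `mutual_information` helper are illustrative, not the paper's code.

```python
import math
from collections import Counter

# Illustrative sketch: score a candidate feature by its mutual
# information with the label, estimated from (feature_value, label) pairs.
def mutual_information(pairs):
    """pairs: list of (feature_value, label), both hashable.
    Returns the estimated I(feature; label) in bits."""
    n = len(pairs)
    joint = Counter(pairs)
    f_marg = Counter(f for f, _ in pairs)
    l_marg = Counter(l for _, l in pairs)
    mi = 0.0
    for (f, l), count in joint.items():
        p_fl = count / n
        # p(f,l) / (p(f) * p(l)) simplifies to count * n / (|f| * |l|)
        mi += p_fl * math.log2(count * n / (f_marg[f] * l_marg[l]))
    return mi
```

A feature perfectly correlated with the label yields 1 bit on balanced binary data, while an independent feature yields 0; ranking candidates by this score (together with frequency) is one straightforward way to pick the "several thousand" features the abstract mentions.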

Collaboration

Top co-author: Makoto Miwa (Toyota Technological Institute)