Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Haruo Yokota is active.

Publication


Featured research published by Haruo Yokota.


International Conference on Data Engineering | 1999

Fat-Btree: an update-conscious parallel directory structure

Haruo Yokota; Yasuhiko Kanemasa; Jun Miyazaki

We propose a parallel directory structure, Fat-Btree, to improve high-speed access for parallel database systems in shared-nothing environments. The Fat-Btree has a threefold aim: to provide an indexing mechanism for fast retrieval in each processor; to balance the amount of data among distributed disks; and to reduce synchronization costs between processors during update operations. We use a probability-based model to compare the throughput and response time of the Fat-Btree with two ordinary parallel Btree structures: one that copies the whole Btree to each processor, and one that stores the index nodes in a single processor. The comparison results indicate that the Fat-Btree is suitable for real parallel database systems that accept update operations.
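The core placement idea lends itself to a short illustration: in a Fat-Btree, each processing element (PE) holds its own leaf pages plus copies of only the index nodes on the root-to-leaf paths leading to them, so nodes near the root are widely replicated while leaves are not replicated at all. The sketch below is a minimal in-memory rendering of that placement rule; the Node class, pe_views function, and leaf_owner callback are illustrative names, not the paper's API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    keys: list
    children: list = field(default_factory=list)  # empty => leaf page

def pe_views(root: Node, leaf_owner) -> dict:
    """Map each PE id to the ids of the nodes it must hold locally."""
    views = {}

    def walk(node, path):
        if not node.children:          # leaf page: exactly one owning PE
            pe = leaf_owner(node)
            # The owner stores the leaf plus every index node on the
            # root-to-leaf path; upper nodes shared by several paths
            # are thereby duplicated across the PEs they lead to.
            views.setdefault(pe, set()).update(id(n) for n in path + [node])
            return
        for child in node.children:
            walk(child, path + [node])

    walk(root, [])
    return views

# Four leaf pages split across two PEs by key range:
leaves = [Node(keys=[k]) for k in (10, 20, 30, 40)]
root = Node(keys=[20], children=[Node(keys=[15], children=leaves[:2]),
                                 Node(keys=[35], children=leaves[2:])])
views = pe_views(root, lambda leaf: 0 if leaf.keys[0] <= 20 else 1)
# {0: 4, 1: 4}: the root is counted in both views, the leaves in one each.
print({pe: len(nodes) for pe, nodes in views.items()})
```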


International Workshop on Research Issues in Data Engineering | 2004

THROWS: an architecture for highly available distributed execution of Web services compositions

Neila Ben Lakhal; Takashi Kobayashi; Haruo Yokota

The emergence of Web services has triggered extensive research efforts. Currently, there is a trend towards deploying business processes as orchestrations of Web services compositions. Given that Web services are inherently loosely coupled and primarily built independently, they are likely to have characteristics (e.g., transaction support, failure recovery, access policies) that are not mutually compliant. Guaranteeing the reliability and availability of the resulting Web services compositions is therefore a challenging issue. Aligned with this tendency, we focus on the availability and reliability of Web services compositions. Specifically, in this paper we propose THROWS, an architecture for highly available distributed execution of Web services compositions. In the THROWS architecture, execution control is hierarchically delegated among dynamically discovered engines, and the progress of a composition's execution across the distributed engines is continuously captured. The compositions executed through the proposed architecture are specified in advance as a hierarchy of arbitrarily nested transactions, whose execution is equipped with retry and compensation mechanisms that enable highly available execution of Web services compositions.
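As a rough illustration of the retry-and-compensation semantics described above, the sketch below executes a linear composition with forward retries and backward recovery. It deliberately simplifies THROWS: steps are local callables rather than dynamically discovered engines, and the hierarchical delegation is flattened. Step and run_composition are hypothetical names, not the paper's API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    invoke: Callable[[], object]                    # forward operation
    compensate: Optional[Callable[[], None]] = None
    retries: int = 2                                # retrial budget

def run_composition(steps: list[Step]) -> list:
    done, results = [], []
    for step in steps:
        for attempt in range(step.retries + 1):
            try:
                results.append(step.invoke())
                done.append(step)
                break
            except Exception:
                if attempt == step.retries:
                    # Backward recovery: undo completed work in
                    # reverse order, then surface the failure.
                    for prev in reversed(done):
                        if prev.compensate:
                            prev.compensate()
                    raise
    return results
```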


Very Large Data Bases | 2009

FENECIA: failure endurable nested-transaction based execution of composite Web services with incorporated state analysis

Neila Ben Lakhal; Takashi Kobayashi; Haruo Yokota

Interest in the Web services (WS) composition (WSC) paradigm is increasing tremendously. A real shift in distributed computing history is expected to occur when the dream of implementing Service-Oriented Architecture (SOA) is realized. However, there is a long way to go to achieve such an ambitious goal. In this paper, we support the idea that, when tackling the WSC issue, the earlier the inevitability of failures is recognized and proper failure-handling mechanisms are defined, from the very early stage of the composite WS (CWS) specification, the greater are the chances of achieving a significant gain in dependability. To formalize this vision, we present the FENECIA (Failure Endurable Nested-transaction based Execution of Composite Web services with Incorporated state Analysis) framework. Our framework approaches the WSC issue from different points of view to guarantee a high level of dependability. In particular, it aims at being simultaneously a failure-handling-devoted CWS specification, execution, and quality of service (QoS) assessment approach. In the first section of our framework, we focus on answering the need for a specification model tailored to the WS architecture. To this end, we introduce WS-SAGAS, a new transaction model. WS-SAGAS introduces key concepts that are not part of the WS architecture pillars, namely arbitrary nesting, state, vitality degree, and compensation, to specify failure-endurable CWS as a hierarchy of recursively nested transactions. In addition, to define the CWS execution semantics without suffering from the hindrance of an XML-based notation, we introduce a textual notation that describes a WSC in terms of definition rules, composability rules, and ordering rules, along with graphical and formal notations. These rules provide the solid foundation needed to formulate the execution semantics of a CWS in terms of execution correctness verification dependencies. To ensure dependable execution of the CWS, we present in the second section of FENECIA our architecture THROWS, in which the execution control of the resulting CWS is distributed among dynamically discovered engines that communicate in a peer-to-peer fashion. A dependable execution is guaranteed in THROWS by keeping track of the execution progress of a CWS and by enforcing forward and backward recovery. In the third section of our approach, we concentrate on showing how considering failures is crucial to acquiring more accurate CWS QoS estimations. We propose a model that assesses several QoS properties of CWS specified as WS-SAGAS transactions and executed in THROWS. We validate our proposal and show its feasibility and broad applicability by describing an implemented prototype and a case study.
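One way to see why folding failure behavior into QoS assessment matters, as the third part of FENECIA argues, is a back-of-the-envelope latency model. The snippet below assumes, purely for illustration (the paper's QoS model is far richer), that each sequential task fails independently with probability p and is retried until it succeeds, making its expected latency t / (1 - p) rather than t.

```python
def expected_latency(tasks: list[tuple[float, float]]) -> float:
    """tasks: (nominal_time, failure_probability) per sequential task."""
    total = 0.0
    for t, p in tasks:
        assert 0.0 <= p < 1.0
        # A task retried until success runs a geometric number of
        # attempts with mean 1 / (1 - p).
        total += t / (1.0 - p)
    return total

# Ignoring failures predicts 30 time units; with 10% failure rates,
# the failure-aware estimate is ~33.3.
print(expected_latency([(10, 0.1), (10, 0.1), (10, 0.1)]))
```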


International Symposium on Database Applications in Non-Traditional Environments | 1999

Autonomous disks for advanced database applications

Haruo Yokota

The scalability and reliability of secondary storage systems are their most significant aspects for advanced database applications. Research on high-function disks has recently attracted a great deal of attention because technological progress now allows disk-resident data processing. This capability is not only useful for executing application programs on the disk, but is also suited to controlling distributed disks so that they are scalable and reliable. We propose autonomous disks, which exploit the disk-resident data processing facility in a network environment. A set of autonomous disks is configured as a cluster in a network, and data is distributed within the cluster and accessed uniformly through a distributed directory. The disks accept simultaneous accesses from multiple hosts via the network, and handle data distribution and load skews. They can also tolerate disk failures and some software errors of the disk controllers, and can reconfigure the cluster after damaged disks are repaired. The data distribution, skew handling, and fault tolerance are completely transparent to the hosts, and because communication is local, the cluster size is scalable. Autonomous disks are applicable to many advanced applications, such as a large Web server hosting many HTML files. We also propose using rules to implement these functions, and we demonstrate their flexibility with example rules.
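The paper proposes implementing the disk-resident control functions with rules. The fragment below sketches what such event-condition-action rules could look like in software; the rule format and the two events shown (a write triggering replication, a disk failure triggering rebuild) are illustrative assumptions, not the paper's rule language.

```python
from typing import Callable

# A rule is (event type, condition, action); the controller fires the
# action of every rule whose event type and condition match.
Rule = tuple[str, Callable[[dict], bool], Callable[[dict], None]]

rules: list[Rule] = [
    # On every write, replicate the block to a backup disk in the cluster.
    ("write",
     lambda ev: True,
     lambda ev: print(f"replicate block {ev['block']} -> disk {ev['backup']}")),
    # On a disk failure, rebuild its data from replicas held elsewhere.
    ("disk_failure",
     lambda ev: ev.get("replicas_available", False),
     lambda ev: print(f"rebuild disk {ev['disk']} from replicas")),
]

def dispatch(event_type: str, ev: dict):
    for etype, cond, action in rules:
        if etype == event_type and cond(ev):
            action(ev)

dispatch("write", {"block": 42, "backup": 3})
dispatch("disk_failure", {"disk": 1, "replicas_available": True})
```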


British National Conference on Databases | 2005

LAX: an efficient approximate XML join based on clustered leaf nodes for XML data integration

Wenxin Liang; Haruo Yokota

Recently, more and more data are published and exchanged on the Internet in XML. However, different XML data sources might contain the same data yet have different structures, so an efficient method is required to integrate such XML data sources, allowing users to conveniently access and acquire more complete and useful information. The tree edit distance is regarded as an effective metric for evaluating the structural similarity of XML documents, but its computational cost is extremely high and the traditional wisdom in join algorithms cannot be applied easily. In this paper, we propose LAX (Leaf-clustering based Approximate XML join algorithm), in which the two XML document trees are clustered into subtrees representing independent items, and the similarity between them is determined by calculating a similarity degree based on the leaf nodes of each pair of subtrees. We also propose an effective algorithm for clustering the XML documents for LAX. We show that the traditional wisdom in join algorithms can easily be applied to LAX and that the join result contains the complete information of the two documents. We then conduct experiments comparing LAX with the tree edit distance and evaluate its performance using both synthetic and real data sets. Our experimental results show that LAX is more efficient and more effective for measuring the approximate similarity between XML documents than the tree edit distance.
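To make the leaf-based similarity degree concrete, the sketch below assumes each clustered subtree has been reduced to its multiset of leaf (name, value) pairs and scores a subtree pair by the fraction of matching leaves. The paper's clustering and matching rules are more involved, so treat this as a minimal rendering of the idea.

```python
from collections import Counter

def similarity_degree(leaves_a, leaves_b) -> float:
    """Fraction of (name, value) leaf pairs shared by two subtrees."""
    if not leaves_a or not leaves_b:
        return 0.0
    # Multiset intersection counts each shared leaf at most once per copy.
    matched = sum((Counter(leaves_a) & Counter(leaves_b)).values())
    return matched / max(len(leaves_a), len(leaves_b))

# Two "book" items from differently structured documents:
x = [("title", "Databases"), ("year", "2005"), ("author", "Liang")]
y = [("title", "Databases"), ("author", "Liang"), ("isbn", "0-306")]
# 2 of 3 leaves match; an approximate join would keep the pair if the
# degree clears a chosen threshold.
print(similarity_degree(x, y))   # 0.666...
```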


New Generation Computing | 1984

A knowledge assimilation method for logic databases

Taizo Miyachi; Susumu Kunifuji; Hajime Kitakami; Koichi Furukawa; Akikazu Takeuchi; Haruo Yokota

In this paper we consider a deductive question-answering system for relational databases as a logic database system, and propose a knowledge assimilation method suitable for such a system. The concept of knowledge assimilation for deductive logic is constructed in an implementable form based on the notion of amalgamating object language and metalanguage. This concept calls for checks on four subconcepts, provability, contradiction, redundancy, and independency, and for the corresponding internal database updates. We have implemented this logic database knowledge assimilation program in PROLOG, a logic programming language, and have found PROLOG suitable for implementing knowledge assimilation.
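The four subconcepts translate naturally into a decision procedure: a clause whose negation is derivable contradicts the database, a derivable clause is redundant, and only an independent clause triggers an update. The Python rendering below assumes an abstract proves(db, goal) oracle; the actual implementation is in PROLOG and is richer than this schematic.

```python
def assimilate(db: frozenset, clause, proves) -> frozenset:
    """proves(db, goal) -> bool is an assumed derivability oracle."""
    if proves(db, ("not", clause)):
        # Contradiction: the clause's negation is already derivable.
        raise ValueError("clause contradicts the database")
    if proves(db, clause):
        # Provability => redundancy: assimilating it changes nothing.
        return db
    # Independency: neither the clause nor its negation is derivable,
    # so the update genuinely extends the database.
    return db | {clause}
```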


New Generation Computing | 1984

A relational database machine with large semiconductor disk and hardware relational algebra processor

Shigeki Shibayama; Takeo Kakuta; Nobuyoshi Miyazaki; Haruo Yokota; Kunio Murakami

This paper describes the basic concepts, design and implementation decisions, standpoints, and significance of the database machine Delta within the scope of Japan's Fifth Generation Computer Project. Delta is planned to be operational in 1985 for researchers' use as a backend database machine for logic programming software development. Delta is basically a relational database machine system: it combines hardware facilities for efficient relational database operations, which are typically represented by relational algebra, with software that handles hardware control and actual database management requirements. Notable features include an attribute-based internal schema that matches the characteristics of relation access from a logic programming environment; this schema also suits the hardware relational algebra manipulation algorithm, which is based on merge-sorting of attributes in hardware, and a large-capacity Semiconductor Disk for fast access to databases. Various implementation decisions concerning database management requirements are made in this novel system configuration, providing a meaningful example of how to construct a combined hardware and software relational database machine. Delta is in the stage between detailed design and implementation.
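Delta's relational algebra engine is built around hardware merge-sorting of attributes. The software sketch below shows the algorithmic core that approach enables, a sort-merge join: sort both relations on the join attribute, then merge the sorted streams. This only illustrates the algorithm; Delta performed the sorting in dedicated hardware.

```python
def merge_join(r, s, key_r, key_s):
    """Equi-join two relations (lists of tuples) by sort-merge."""
    r = sorted(r, key=key_r)
    s = sorted(s, key=key_s)
    i = j = 0
    out = []
    while i < len(r) and j < len(s):
        kr, ks = key_r(r[i]), key_s(s[j])
        if kr < ks:
            i += 1
        elif kr > ks:
            j += 1
        else:
            # Emit the cross product of the runs sharing this key.
            j0 = j
            while j < len(s) and key_s(s[j]) == kr:
                out.append(r[i] + s[j])
                j += 1
            i += 1
            if i < len(r) and key_r(r[i]) == kr:
                j = j0          # rewind for the next equal r-tuple
    return out

print(merge_join([(1, "a"), (2, "b")], [(2, "x"), (2, "y")],
                 key_r=lambda t: t[0], key_s=lambda t: t[0]))
# [(2, 'b', 2, 'x'), (2, 'b', 2, 'y')]
```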


International Conference on Data Engineering | 2005

VLEI code: an efficient labeling method for handling XML documents in an RDB

Kazuhito Kobayashi; Wenxin Liang; D. Kobayashi; Akitsugu Watanabe; Haruo Yokota

A number of XML labeling methods have been proposed for storing XML documents in relational databases. However, they have a weak point: insertion operations. We propose the variable length endless insertable (VLEI) code and apply it to XML labeling to reduce the cost of insertion operations. The results of our experiments indicate that a combination of the VLEI code and Dewey order is effective for handling skewed insertions.
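The abstract does not spell out the code itself, so the following sketch only illustrates the property a variable-length, endlessly insertable code must provide: between any two existing labels, a fresh label can always be generated with no relabeling. It maps each binary label to a dyadic fraction and emits the midpoint; this mapping is an assumption for illustration, not necessarily the paper's exact bit assignment.

```python
from fractions import Fraction

def value(code: str) -> Fraction:
    # Read the label followed by a terminating '1' as a binary
    # fraction, mapping each label to a distinct dyadic rational
    # in (0, 1); this induces the sort order on labels.
    bits = code + "1"
    return Fraction(int(bits, 2), 2 ** len(bits))

def between(a: str, b: str) -> str:
    # Generate a label strictly between a and b without touching
    # any existing label.
    lo, hi = value(a), value(b)
    assert lo < hi, "a must sort before b"
    mid = (lo + hi) / 2
    bits = []
    while mid:                 # the midpoint is dyadic, so this terminates
        mid *= 2
        if mid >= 1:
            bits.append("1")
            mid -= 1
        else:
            bits.append("0")
    return "".join(bits[:-1])  # drop the terminating '1' to recover a label

root, right = "", "1"
new = between(root, right)                        # "10"
print(value(root) < value(new) < value(right))    # True
```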


International Symposium on Computer Architecture | 1986

A model and an architecture for a relational knowledge base

Haruo Yokota; Hidenori Itoh

A relational knowledge base model and an architecture that manipulates the model are presented. An item stored in the relational knowledge base is called a term, and a unification operation on terms is used as the retrieval mechanism. The relational knowledge base architecture we propose consists of a number of unification engines, several disk systems, a control processor, and a multiport page-memory. The system has a knowledge compiler to support a variety of knowledge representations.
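Since retrieval in the proposed machine is a unification operation over stored terms, a small software model of that retrieval step may help. In the sketch below, terms are tuples such as ('parent', 'taro', '?x'), variables are strings starting with '?', and the occurs check is omitted; this is an illustrative rendering, not the machine's hardware algorithm.

```python
def walk(t, subst):
    # Chase variable bindings ('?x'-style strings) through subst.
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(x, y, subst=None):
    """Return a substitution unifying x and y, or None on failure."""
    subst = {} if subst is None else subst
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if isinstance(x, str) and x.startswith("?"):
        return {**subst, x: y}
    if isinstance(y, str) and y.startswith("?"):
        return {**subst, y: x}
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for xi, yi in zip(x, y):
            subst = unify(xi, yi, subst)
            if subst is None:
                return None
        return subst
    return None   # clash between distinct constants or functors

# Retrieval: return every stored term that unifies with the query.
store = [("parent", "taro", "jiro"), ("parent", "jiro", "saburo")]
print([t for t in store if unify(("parent", "?x", "jiro"), t) is not None])
# [('parent', 'taro', 'jiro')]
```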


Symposium on Principles of Database Systems | 1984

An enhanced inference mechanism for generating relational algebra queries

Haruo Yokota; Susumu Kunifuji; Takeo Kakuta; Nobuyoshi Miyazaki; Shigeki Shibayama; Kunio Murakami

A system for interfacing Prolog programs with relational algebra is presented. The system produces relational algebra queries using a deferred evaluation approach. Least fixed point (LFP) queries are automatically managed. An optimization method for removing redundant relations is also presented.
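A classic instance of an LFP query of the kind such a system must manage is the ancestor relation, the transitive closure of parent: the recursive Prolog predicate cannot be expressed as a single relational algebra expression, so it is evaluated by iterating a join-project-union step to a fixed point. The relation names below are examples, not from the paper.

```python
parent = {("taro", "jiro"), ("jiro", "saburo")}

def lfp_ancestor(parent):
    # ancestor := parent; repeat
    #   ancestor := ancestor UNION project(join(ancestor, parent))
    # until no new tuples appear (the least fixed point).
    ancestor = set(parent)
    while True:
        step = {(a, c)
                for (a, b) in ancestor
                for (b2, c) in parent if b == b2}
        if step <= ancestor:
            return ancestor
        ancestor |= step

print(sorted(lfp_ancestor(parent)))
# [('jiro', 'saburo'), ('taro', 'jiro'), ('taro', 'saburo')]
```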

Collaboration


Dive into Haruo Yokota's collaborations.

Top Co-Authors

Akitsugu Watanabe (Tokyo Institute of Technology)
D. Kobayashi (Tokyo Institute of Technology)
Yousuke Watanabe (Tokyo Institute of Technology)
Wenxin Liang (Tokyo Institute of Technology)
Jun Miyazaki (Tokyo Institute of Technology)
Satoshi Hikida (Tokyo Institute of Technology)
Hieu Hanh Le (Tokyo Institute of Technology)
Neila Ben Lakhal (Tokyo Institute of Technology)