Publication


Featured research published by Tsuyoshi Sugibuchi.


International World Wide Web Conference | 2005

Interactive web-wrapper construction for extracting relational information from web documents

Tsuyoshi Sugibuchi; Yuzuru Tanaka

In this paper, we propose a new user interface for interactively specifying Web wrappers that extract relational information from Web documents. The focus of this study is on improving the user's trial-and-error cycle when constructing a wrapper. Our approach combines a lightweight wrapper construction method with a dynamic previewing interface that quickly shows how a generated wrapper behaves. We adopt a simple algorithm that constructs a Web wrapper from given extraction examples in less than 100 milliseconds. Using this algorithm, our system dynamically generates a new wrapper from the stream of mouse events with which the user specifies extraction examples, and immediately updates a preview showing which HTML nodes the generated wrapper extracts from the source Web document. Through this animated display, a user can try many different combinations of extraction examples simply by moving the mouse over the Web document, and quickly converge on a set of examples that yields the intended wrapper.
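
As a rough illustration of the example-based construction idea (not the algorithm from the paper, whose details are not given here), the following Python sketch generalizes the tag paths of user-selected example nodes and extracts every text node sharing that path; names such as PathCollector and build_wrapper are invented for the example.

```python
# A hedged, stand-alone sketch of example-based wrapper induction.
from html.parser import HTMLParser

class PathCollector(HTMLParser):
    """Collect (tag_path, text) pairs for every non-empty text node."""
    def __init__(self):
        super().__init__()
        self.stack, self.nodes = [], []

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        # Pop back to the most recent matching open tag.
        for i in range(len(self.stack) - 1, -1, -1):
            if self.stack[i] == tag:
                del self.stack[i:]
                break

    def handle_data(self, data):
        if data.strip():
            self.nodes.append((tuple(self.stack), data.strip()))

def build_wrapper(example_paths):
    """In this toy version the 'wrapper' is simply the common tag path of the examples."""
    assert len(set(example_paths)) == 1, "examples must share a single tag path here"
    return example_paths[0]

def extract(nodes, wrapper):
    """Apply the wrapper: return the text of every node whose tag path matches."""
    return [text for path, text in nodes if path == wrapper]

html_doc = ("<html><body><table>"
            "<tr><td>Alice</td><td>30</td></tr>"
            "<tr><td>Bob</td><td>25</td></tr>"
            "</table></body></html>")
collector = PathCollector()
collector.feed(html_doc)

# The "mouse clicks": the user selects two example cells.
examples = [path for path, text in collector.nodes if text in ("Alice", "Bob")]
wrapper = build_wrapper(examples)
print(extract(collector.nodes, wrapper))   # ['Alice', '30', 'Bob', '25']
```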


International Symposium on Methodologies for Intelligent Systems | 2009

Analyses of Knowledge Creation Processes Based on Different Types of Monitored Data

Jan Paralic; František Babič; Jozef Wagner; Ekaterina Simonenko; Nicolas Spyratos; Tsuyoshi Sugibuchi

This paper presents specialized methods for analyzing knowledge creation processes and knowledge practices that are (at least partially) reflected in work within a virtual collaborative environment. Support for such analysis and evaluation of knowledge creation processes is provided by historical data stored in the virtual working environment, which describe various aspects of the monitored processes (e.g. semantic information, content, logs of activities). The proposed analytical methods cover different types of analysis: (a) statistical analysis, which provides information about processes and the possibility to visualize this information in user-selected presentation modes; and (b) timeline-based analysis, which supports visualization of the actual process execution with all relevant information, including the possibility to identify and further analyze working patterns in knowledge creation processes (a projection of knowledge practices). Experimental evaluation of the proposed methods is carried out within the IST EU project KP-Lab.


Journal on Data Semantics | 2014

A Model for Digital Libraries and its Translation to RDF

Carlo Meghini; Nicolas Spyratos; Tsuyoshi Sugibuchi; Jitao Yang

With the advent of the Web, the traditional concept of a library has undergone a profound change: from a collection of physical information resources (mostly books) to a collection of digital resources. In addition, the notion of a digital resource covers not only texts in digital form but, in general, any kind of multimedia resource. In a traditional library, physical information resources are managed through well-understood manual procedures, whereas in a digital library digital resources are organized according to a data model, discovered through a query language, and managed in a highly automated way. In this paper, we present a data model and query language for digital libraries supporting identification, structuring, metadata, re-use, and discovery of digital resources. The model we propose is inspired by the Web and is formalized as a first-order theory, certain models of which correspond to the notion of a digital library. Additionally, we provide a full translation of the model into RDF and of the query language into SPARQL.
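
To make the RDF side concrete, here is a minimal Python/rdflib sketch of the flavour of such a translation; the vocabulary used (ex:Collection, ex:hasMember) is invented for the example and is not the mapping defined in the paper.

```python
# A minimal sketch, assuming an invented ex: vocabulary; not the paper's actual RDF mapping.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, DCTERMS

EX = Namespace("http://example.org/dl/")
g = Graph()
g.bind("ex", EX)
g.bind("dcterms", DCTERMS)

# A tiny digital library: one collection containing two digital resources.
g.add((EX.coll1, RDF.type, EX.Collection))
g.add((EX.doc1, DCTERMS.title, Literal("A paper on digital libraries")))
g.add((EX.img1, DCTERMS.title, Literal("A scanned manuscript page")))
g.add((EX.coll1, EX.hasMember, EX.doc1))
g.add((EX.coll1, EX.hasMember, EX.img1))

# Discovery via SPARQL: titles of all resources in the collection.
query = """
SELECT ?title WHERE {
  ex:coll1 ex:hasMember ?r .
  ?r dcterms:title ?title .
}
"""
for row in g.query(query, initNs={"ex": EX, "dcterms": DCTERMS}):
    print(row.title)
```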


13th International Conference on Information Visualisation | 2009

A Framework to Analyze Information Visualization Based on the Functional Data Model

Tsuyoshi Sugibuchi; Nicolas Spyratos; Ekaterina Simonenko

We propose a framework for analyzing information visualization (infovis) based on the concept of functional dependency (FD). Although functional dependencies express important semantic information about data, they are rarely taken into account by general-purpose infovis tools, a fact that may cause problems in the visualization process. The main idea of our approach is to use the concept of FD to model the invariant structures of all three components of information visualization, that is, data, visual representations, and visual mappings.
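
The following Python fragment illustrates the kind of check such a framework can make explicit: a visual mapping from a data attribute to a visual channel is well defined only if the corresponding functional dependency holds in the data. The code is a generic illustration, not an implementation from the paper.

```python
# Generic FD check (illustrative; not code from the paper).
def holds_fd(rows, determinant, dependent):
    """True if the FD determinant -> dependent holds in rows, i.e. each
    determinant value is associated with exactly one dependent value."""
    seen = {}
    for r in rows:
        key = tuple(r[a] for a in determinant)
        val = tuple(r[a] for a in dependent)
        if seen.setdefault(key, val) != val:
            return False
    return True

rows = [
    {"city": "Paris", "country": "FR", "population": 2.1},
    {"city": "Lyon",  "country": "FR", "population": 0.5},
    {"city": "Tokyo", "country": "JP", "population": 13.9},
]

# A bar chart mapping each city to a bar of height 'population' is well defined,
# because the FD city -> population holds; mapping country to a single bar is not.
print(holds_fd(rows, ("city",), ("population",)))     # True
print(holds_fd(rows, ("country",), ("population",)))  # False
```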


Discovery Science | 2001

Component-Based Framework for Virtual Information Materialization

Yuzuru Tanaka; Tsuyoshi Sugibuchi

Various research fields in science and technology are now accumulating large amounts of data in databases, using recently developed, computer-controlled data-acquisition tools for measurement, analysis, and observation. Researchers expect that such large-scale data accumulation will allow them to simulate various physical, chemical, and/or biological phenomena on computers without carrying out time-consuming and/or expensive real experiments. Information visualization for DB-based simulation requires each visualized record to work as an interactive object, yet current information visualization systems visualize records without materializing them as interactive objects. Researchers in these fields develop individual or community mental models of their target phenomena and often want to visualize information based on their own mental models. In this paper we propose a generic framework for developing virtual materializations of database records based on IntelligentBox, the componentware architecture for 3D applications that we developed in 1995. The framework provides visual interactive components for (1) accessing databases, (2) specifying and modifying database queries, (3) defining an interactive 3D object as a template to materialize each record in a virtual space, and (4) defining a virtual space and its coordinate system for the information materialization. These components are represented as boxes, i.e., components in the IntelligentBox architecture.
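
A heavily reduced sketch of this component composition is given below, with each box flattened to a plain Python object and the 3D rendering replaced by printing; the class and method names are invented for illustration and are not the IntelligentBox API.

```python
# Invented class names; a flattened illustration of the box composition, not the IntelligentBox API.
class DatabaseBox:
    """Stands in for the database-access component."""
    def __init__(self, records):
        self.records = records
    def select(self, predicate):
        return [r for r in self.records if predicate(r)]

class QueryBox:
    """Stands in for the query-specification component."""
    def __init__(self, database, predicate):
        self.database, self.predicate = database, predicate
    def results(self):
        return self.database.select(self.predicate)

class RecordBox:
    """Template that materializes one record as an interactive object."""
    def __init__(self, record):
        self.record = record
    def on_click(self):               # interaction stands in for the 3D behaviour
        print("record selected:", self.record)

class SpaceBox:
    """Stands in for the virtual space holding the materialized records."""
    def __init__(self, query_box):
        self.children = [RecordBox(r) for r in query_box.results()]

db = DatabaseBox([{"element": "Fe", "density": 7.87},
                  {"element": "Al", "density": 2.70}])
space = SpaceBox(QueryBox(db, lambda r: r["density"] > 3.0))
space.children[0].on_click()          # record selected: {'element': 'Fe', 'density': 7.87}
```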


Adaptive Hypermedia Conference | 2001

Meme Media and Meme Pools for Re-editing and Redistributing Intellectual Assets

Yuzuru Tanaka; Jun Fujima; Tsuyoshi Sugibuchi

While current Web technologies allow us to publish intellectual assets in world-wide repositories, to browse their huge accumulation, and to download some assets, we still lack good tools to flexibly re-edit and redistribute such intellectual assets for reuse in different contexts. This paper reviews the IntelligentPad and IntelligentBox meme media architectures together with their potential applications, and proposes both the use of XML/XSL or XHTML to define pads and a world-wide web of meme pools. When applied to Web documents and services, meme media and meme pool technologies allow us to annotate Web pages with any type of meme media object, and to extract any portion of a page and make it work as a meme media object. We can functionally combine such extracted meme media objects with each other or with other application meme media objects, and redistribute the composite meme media objects for further reuse by other people.


Italian Research Conference on Digital Library Management Systems | 2012

Metadata Inference for Description Authoring in a Document Composition Environment

Tsuyoshi Sugibuchi; Ly Anh Tuan; Nicolas Spyratos

In this paper, we propose a simple model for metadata management in a document composition environment. Our model considers (1) composite documents in the form of trees, whose nodes are either atomic documents or other composite documents, and (2) metadata, or descriptions, of documents in the form of sets of terms taken from a taxonomy. We present a formal definition of our model and several notions of inferred description. Inferred descriptions can be used for term suggestion, which allows users to easily define and manage document descriptions while taking into account what we call the soundness of descriptions.
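
The sketch below illustrates the setting of document trees and taxonomy-based descriptions; the inference rule it uses (keep the most specific terms implied by every child's description) is only a plausible guess for illustration, not necessarily the inference defined in the paper.

```python
# Hedged sketch: taxonomy-aware description inference for a composite document.
# Taxonomy as a child -> parent map ("is-a" edges); terms and rule are illustrative.
PARENT = {"jazz": "music", "rock": "music", "music": "art", "painting": "art"}

def ancestors(term):
    """Return the term together with all of its taxonomy ancestors."""
    out = {term}
    while term in PARENT:
        term = PARENT[term]
        out.add(term)
    return out

def infer_description(child_descriptions):
    """Infer a description for a composite node: keep the terms that are
    implied (directly or via the taxonomy) by every child's description."""
    closures = [set().union(*(ancestors(t) for t in d)) for d in child_descriptions]
    common = set.intersection(*closures)
    # Drop terms that are strict ancestors of other common terms, keeping the most specific ones.
    return {t for t in common if not any(t in ancestors(u) and t != u for u in common)}

# Example: composite documents whose children are described with taxonomy terms.
print(infer_description([{"jazz"}, {"rock"}]))       # {'music'}
print(infer_description([{"jazz"}, {"painting"}]))   # {'art'}
```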


Theory and Practice of Digital Libraries | 2011

Design, implementation and evaluation of a user generated content service for Europeana

Nicola Aloia; Cesare Concordia; Anne Marie van Gerwen; Preben Hansen; Micke Kuwahara; Anh Tuan Ly; Carlo Meghini; Nicolas Spyratos; Tsuyoshi Sugibuchi; Yuzuru Tanaka; Jitao Yang; Nicola Zeni

The paper presents an overview of the user-generated content service that the ASSETS Best Practice Network is designing, implementing, and evaluating with users for Europeana, the European digital library. The service will allow Europeana users to contribute to the contents of the digital library in several different ways, such as uploading simple media objects along with their descriptions, annotating existing objects, or enriching existing descriptions. The user and system requirements are outlined first and used to derive the basic principles underlying the service. A conceptual model of the entities required for the realization of the service and a general sketch of the system architecture are also given, and used to illustrate the basic workflow of some important operations. Finally, the planning of the user evaluation is presented, aimed at validating the service before making it available to the final users.


Archive | 2012

Mirroring Tools for Collaborative Analysis and Reflection

Christoph Richter; Ekaterina Simonenko; Tsuyoshi Sugibuchi; Nicolas Spyratos; František Babič; Jozef Wagner; Jan Paralic; Michal Racek; Crina Damşa; Vassilis Christophides

Analysing and reflecting on one's own and others' working practices is an essential meta-activity for any kind of project work, and it plays a prominent role in the type of object-oriented inquiry and trialogical learning that the KP-Lab project focuses on. Analysis and reflection, therefore, are not understood merely as means to optimize or improve a given way of working, but also as an active and productive process geared towards the advancement of knowledge practices.


International Workshop on Information Search, Integration, and Personalization | 2012

Parallelism and Rewriting for Big Data Processing

Nicolas Spyratos; Tsuyoshi Sugibuchi

So-called “big data” is increasingly present in modern applications, in which massively parallel processing is the main approach to achieving acceptable performance. However, as the size of the data keeps increasing, even parallelism will meet its limits unless it is combined with other powerful processing techniques. In this paper we propose to combine parallelism with rewriting, that is, reusing previous results stored in a cache in order to perform new (parallel) computations. To do this, we introduce an abstract framework based on the lattice of partitions of the data set. Our basic contributions are: (a) showing that our framework allows the rewriting of parallel computations; (b) deriving the basic principles of optimal cache management; and (c) showing that, in the case of structured data, our approach can leverage both structure and semantics in the data to improve performance.
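
A toy Python sketch of the rewriting idea follows: a cached aggregation computed at a finer partition is coarsened to answer a new query instead of rescanning the data. The function names and the sum aggregation are illustrative choices, not the paper's formal framework.

```python
# Toy sketch of rewriting over the partition lattice; not the paper's formalism.
from collections import defaultdict

def aggregate(records, group_by, measure):
    """Sum `measure` per group, where `group_by` is a tuple of attribute names."""
    out = defaultdict(float)
    for r in records:
        out[tuple(r[a] for a in group_by)] += r[measure]
    return dict(out)

def rewrite_from_cache(cached, cached_attrs, target_attrs):
    """Coarsen a cached result: valid when target_attrs is a subset of cached_attrs,
    i.e. the target partition is coarser in the partition lattice."""
    assert set(target_attrs) <= set(cached_attrs)
    idx = [cached_attrs.index(a) for a in target_attrs]
    out = defaultdict(float)
    for key, value in cached.items():
        out[tuple(key[i] for i in idx)] += value
    return dict(out)

records = [
    {"country": "FR", "year": 2011, "sales": 10.0},
    {"country": "FR", "year": 2012, "sales": 7.0},
    {"country": "JP", "year": 2011, "sales": 5.0},
]
cache = aggregate(records, ("country", "year"), "sales")          # fine partition, computed once
by_country = rewrite_from_cache(cache, ("country", "year"), ("country",))
print(by_country)   # {('FR',): 17.0, ('JP',): 5.0}
```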

Collaboration


Tsuyoshi Sugibuchi's top co-authors and their affiliations.

Jitao Yang, University of Paris-Sud
František Babič, Technical University of Košice
Jan Paralic, Technical University of Košice
Jozef Wagner, Technical University of Košice
Carlo Meghini, Istituto di Scienza e Tecnologie dell'Informazione