Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Takeshi Abekawa is active.

Publication


Featured research published by Takeshi Abekawa.


Meeting of the Association for Computational Linguistics | 2006

Japanese Dependency Parsing Using Co-Occurrence Information and a Combination of Case Elements

Takeshi Abekawa; Manabu Okumura

In this paper, we present a method that improves Japanese dependency parsing by using large-scale statistical information. It takes into account two kinds of information not considered in previous statistical (machine-learning-based) parsing methods: dependency relations among the case elements of a verb, and co-occurrence relations between a verb and its case elements. This information can be collected from the results of automatic dependency parsing of large-scale corpora. In an experiment, we used our method to rerank the output of an existing machine-learning-based parser, and the results showed that our method improves the accuracy of the existing parser.
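
The reranking step can be pictured with a minimal sketch, assuming a hypothetical co-occurrence table and a simple linear combination of scores; the data structures, numbers, and the weighting below are illustrative, not the paper's actual model.

```python
# Illustrative reranking of dependency-parse candidates with verb/case-element
# co-occurrence scores (hypothetical data; not the authors' exact model).
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Dependency:
    verb: str          # governing verb
    case_element: str  # noun phrase filling a case slot
    case_marker: str   # e.g. "ga", "o", "ni"

# Co-occurrence scores collected from automatically parsed large corpora
# (toy values here; in practice log-probabilities or association scores).
COOC: Dict[Tuple[str, str, str], float] = {
    ("taberu", "ringo", "o"): 2.1,     # "eat apple-ACC"
    ("taberu", "gakusei", "ga"): 1.4,  # "student-NOM eats"
}

def cooc_score(parse: List[Dependency]) -> float:
    """Sum co-occurrence scores over all verb/case-element dependencies in a parse."""
    return sum(COOC.get((d.verb, d.case_element, d.case_marker), 0.0) for d in parse)

def rerank(candidates: List[Tuple[float, List[Dependency]]],
           weight: float = 0.5) -> List[Dependency]:
    """Pick the candidate maximising base parser score + weighted co-occurrence score."""
    return max(candidates, key=lambda c: c[0] + weight * cooc_score(c[1]))[1]
```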


Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009) | 2009

Fast Decoding and Easy Implementation: Transliteration as Sequential Labeling

Eiji Aramaki; Takeshi Abekawa

Although most previous transliteration methods are based on a generative model, this paper presents a discriminative transliteration model that uses conditional random fields. We regard the target character(s) as a kind of label, which enables us to treat transliteration as a sequential labeling process. This approach has two advantages: (1) fast decoding and (2) easy implementation. Experimental results yielded competitive performance, demonstrating the feasibility of the proposed approach.
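
As a rough sketch of the sequential-labeling view, each source character can be tagged with the target character(s) it produces. The example below uses the sklearn-crfsuite package as a stand-in CRF; the toy alignment, feature set, and the "_" (no output) label are assumptions for illustration, not the authors' setup.

```python
# Transliteration as sequential labeling: one label (target character or "_")
# per source character. sklearn-crfsuite is used here as a generic CRF.
import sklearn_crfsuite

def char_features(word, i):
    """Features for the i-th character: the character itself and its neighbours."""
    return {
        "char": word[i],
        "prev": word[i - 1] if i > 0 else "<BOS>",
        "next": word[i + 1] if i < len(word) - 1 else "<EOS>",
    }

# Toy training pair: "naomi" -> "ナオミ"; "_" means the character emits nothing
# because it was consumed with the previous one ("na" -> ナ, "mi" -> ミ).
X_train = [[char_features("naomi", i) for i in range(len("naomi"))]]
y_train = [["ナ", "_", "オ", "ミ", "_"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict([[char_features("naomi", i) for i in range(len("naomi"))]]))
```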


International Journal of Web Information Systems | 2010

Extracting content holes by comparing community‐type content with Wikipedia

Akiyo Nadamoto; Eiji Aramaki; Takeshi Abekawa; Yohei Murakami

Purpose – Community-type content, such as that of social network services and blogs, is maintained by communities of people. Community members occasionally fail to consider the content from multiple perspectives, so the volume of information is often inadequate. The authors therefore consider it necessary to present users with the missing information. The purpose of this paper is to search for content "holes", that is, information that users of community-type content have missed. Design/methodology/approach – A content hole is defined as the differing information obtained by comparing community-type content with other content, such as other community-type content, conventional web content, and real-world content. The paper suggests multiple types of content holes and proposes a system that compares community-type content with Wikipedia articles and identifies the content holes. The system first identifies structured keywords in the community-type content, and extracts target articles from Wiki...
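
As an illustration of the content-hole idea under simplifying assumptions, the sketch below treats terms that appear in the matched Wikipedia article but never in the community content as candidate holes. The naive tokenisation and keyword filter are stand-ins for the paper's structured-keyword extraction.

```python
# Illustrative "content hole" extraction: terms covered by the matching
# Wikipedia article but absent from the community content are candidate holes.
import re

def terms(text):
    """Very rough keyword set: lower-cased word tokens longer than three characters."""
    return {w for w in re.findall(r"[A-Za-z]+", text.lower()) if len(w) > 3}

def content_holes(thread_comments, wikipedia_article):
    """Terms present in the Wikipedia article but missing from the thread."""
    thread_terms = set().union(*map(terms, thread_comments))
    return terms(wikipedia_article) - thread_terms

print(content_holes(
    ["The battery life of this camera is great."],
    "A digital camera has a sensor, a lens, a battery and a shutter mechanism.",
))  # e.g. {'digital', 'sensor', 'lens', 'shutter', 'mechanism'}
```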


Archive | 2013

The Place of Comparable Corpora in Providing Terminological Reference Information to Online Translators: A Strategic Framework

Kyo Kageura; Takeshi Abekawa

This paper examines the status of comparable corpora as potential terminological resources with special reference to the applicational framework of helping online translators. In the past 15 years, we have witnessed great advances in bilingual term extraction technologies based both on parallel and comparable corpora. The use of comparable corpora is widely held to be especially important, because not many parallel corpora are available in many language pairs. However, human language practitioners, including online translators, do not make much use of terminological resources constructed using automatic methods; there seems to be a gap between what can be provided through corpus-based automatic extraction methods and what translators actually require. Against this backdrop, this paper first clarifies online translators’ requirements for terminology resources. Based on this clarification, the paper examines what should be taken into account in the use of comparable corpora for bilingual term extraction if the resultant terminology resources are to be really used by translators. The discussion in this paper is deductive rather than empirical, based on the authors’ experience in talking with online translators in the course of developing the integrated translation-hosting and translation-aid site Minna no Hon’yaku (MNH: translation of/by/for all) since 2005 (the site has been open to the public since April 2009).


International Universal Communication Symposium | 2009

QRpotato: a system that exhaustively collects bilingual technical term pairs from the web

Takeshi Abekawa; Kyo Kageura

This paper reports on QRpotato, a system that exhaustively collects bilingual technical term pairs from the Web. The system uses bilingual (Japanese-English) term pairs taken from an existing terminological dictionary as seed pairs, searches Web pages using the seed pairs, and extracts bilingual term pair candidates from the retrieved pages using relational patterns identified between the seed term pairs. We collected about 2.2 million distinct term pair candidates using about 210,000 seed term pairs. Manual evaluation of part of the candidates shows the effectiveness of the method.
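
The pattern-based extraction step can be sketched roughly as follows: pages on which a seed pair co-occurs are kept, and a lexical pattern such as "ENGLISH (JAPANESE)" is applied to harvest new candidate pairs. The seed list and the single hard-coded regular expression are assumptions for illustration, not QRpotato's actual pattern set, and the raw candidates are deliberately noisy, as in the paper, where they are evaluated afterwards.

```python
# Illustrative pattern-based harvesting of bilingual term pair candidates.
import re

SEED_PAIRS = [("machine translation", "機械翻訳")]

# English phrase immediately followed by a parenthesised Japanese string.
PAIR_PATTERN = re.compile(r"([A-Za-z][A-Za-z ]+?)\s*[（(]([ぁ-んァ-ヶ一-龠ー]+)[）)]")

def page_matches_seed(page):
    """Keep only pages on which at least one seed pair co-occurs."""
    return any(en in page and ja in page for en, ja in SEED_PAIRS)

def extract_candidates(pages):
    candidates = set()
    for page in pages:
        if not page_matches_seed(page):
            continue
        for en, ja in PAIR_PATTERN.findall(page):
            candidates.add((en.strip().lower(), ja))
    return candidates

pages = ["Recent work on machine translation (機械翻訳) and dependency parsing (係り受け解析)."]
print(extract_candidates(pages))
```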


Hawaii International Conference on System Sciences | 2011

Gist of a Thread in Social Network Services Based on Credibility of Wikipedia

Akiyo Nadamoto; Yu Suzuki; Takeshi Abekawa

Users of social network services (SNSs) can sometimes enter into heated discussions that prompt them to concentrate on a single issue and lose track of the actual theme. We believe it would be beneficial to present information that helps users and visitors understand the gist of the discussion at a glance. In this paper, we propose a system that presents the gist of a thread on an SNS, together with basic information about it, by comparing the comments in the thread with a Wikipedia article. Wikipedia articles, however, are not always credible: when we compare a thread on an SNS with Wikipedia, the Wikipedia article must have credible content. We measure the credibility of an article based on the credibility of its editors. We first extract target passages that are candidates for the gist of a thread based on the Wikipedia table of contents (TOC). We then measure the credibility of Wikipedia editors using the edit history and compute the credibility of the article from the editors' credibility. The target passage that has a high degree of similarity with a comment in the SNS and a high credibility rate becomes the gist of the thread. Consequently, users and viewers can ascertain the gist of an SNS thread by viewing a Wikipedia TOC with credibility.
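
A minimal sketch of the credibility idea, under assumed definitions: an editor's credibility is approximated by how much of the text they contributed survives in the current revision, and an article's credibility is the contribution-weighted average over its editors. These formulas are illustrative stand-ins, not the paper's exact measures.

```python
# Illustrative credibility measures over a simplified edit history.
from collections import defaultdict

def editor_credibility(edit_history):
    """edit_history: iterable of (editor, chars_added, chars_surviving) tuples."""
    added, surviving = defaultdict(int), defaultdict(int)
    for editor, chars_added, chars_surviving in edit_history:
        added[editor] += chars_added
        surviving[editor] += chars_surviving
    return {e: surviving[e] / added[e] for e in added if added[e] > 0}

def article_credibility(edit_history):
    """Contribution-weighted average of the editors' credibility."""
    cred = editor_credibility(edit_history)
    total = sum(chars for _, chars, _ in edit_history)
    weighted = sum(chars * cred[editor] for editor, chars, _ in edit_history)
    return weighted / total if total else 0.0

history = [("alice", 1000, 900), ("bob", 200, 40), ("alice", 300, 300)]
print(article_credibility(history))  # roughly 0.83 with these toy numbers
```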


International Journal of Business Data Communications and Networking | 2011

Acquiring the Gist of Social Network Service Threads via Comparison with Wikipedia

Akiyo Nadamoto; Eiji Aramaki; Takeshi Abekawa; Yohei Murakami

Internet-based social network services (SNSs) have grown increasingly popular and produce a great amount of content. Multiple users freely post comments in SNS threads, and extracting the gist of these comments can be difficult because of their complicated dialog. In this paper, the authors propose a system that extracts the gist of an SNS thread by comparing it with Wikipedia. The granularity of information in an SNS thread differs from that in Wikipedia articles, which means that the information in a thread may be related to several different articles on Wikipedia. The authors extract target articles from Wikipedia based on its link graph. When an SNS thread is compared with Wikipedia, the focus is on the table of contents (TOC) of the relevant Wikipedia articles. The system uses a proposed coverage degree to compare the comments in a thread with the information in the TOC; the Wikipedia paragraph with the higher coverage degree becomes the gist of the thread.
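
The coverage-degree comparison can be pictured with a small sketch, assuming a simple definition: for each TOC section, the fraction of the thread's keywords that the section's text covers, with the best-covered section reported as the gist. The paper's actual coverage degree may be defined differently.

```python
# Illustrative "coverage degree" (assumed definition) between thread comments
# and the text under each Wikipedia TOC section.
import re

def keywords(text):
    return {w for w in re.findall(r"[A-Za-z]+", text.lower()) if len(w) > 3}

def coverage_degree(thread_comments, section_text):
    thread_kw = set().union(*map(keywords, thread_comments))
    return len(thread_kw & keywords(section_text)) / len(thread_kw) if thread_kw else 0.0

def gist_section(thread_comments, toc_sections):
    """toc_sections: dict mapping section title -> section text."""
    return max(toc_sections, key=lambda t: coverage_degree(thread_comments, toc_sections[t]))

sections = {"Battery": "Battery life and charging of digital cameras ...",
            "Lens": "Lens mounts, focal length and aperture ..."}
print(gist_section(["How long does the battery last while charging?"], sections))  # "Battery"
```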


Information Integration and Web-based Applications & Services | 2010

Extracting the gist of social network services using Wikipedia

Akiyo Nadamoto; Eiji Aramaki; Takeshi Abekawa; Yohei Murakami

Social network services (SNSs), which are maintained by communities of people, are among the popular Web 2.0 tools. Multiple users freely post their comments to an SNS thread, and it is difficult to understand the gist of the comments because the dialog in an SNS thread is complicated. In this paper, we propose a system that uses Wikipedia to present, at a glance, the gist of an SNS thread and basic information about it. We focus on the table of contents (TOC) of the relevant articles on Wikipedia. Our system compares the comments in a thread with the information in the TOC and identifies contents that are similar. We consider the similar contents in the TOC to be the gist of the thread, and the paragraphs in Wikipedia similar to the comments in the thread to be basic information about the thread. Thus, a user can obtain the gist of an SNS thread by viewing a table of similar contents.
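
The TOC-based comparison first needs the article's table of contents. The sketch below fetches it through the public MediaWiki API (action=parse with prop=sections); the endpoint and parameters are the standard MediaWiki ones, while error handling is omitted and the original system presumably worked against Japanese Wikipedia rather than the English endpoint used here.

```python
# Fetch a Wikipedia article's table of contents via the MediaWiki API
# (action=parse, prop=sections). Error handling is omitted for brevity.
import requests

def wikipedia_toc(title, endpoint="https://en.wikipedia.org/w/api.php"):
    """Return the section headings (the TOC) of a Wikipedia article."""
    params = {"action": "parse", "page": title, "prop": "sections", "format": "json"}
    data = requests.get(endpoint, params=params, timeout=10).json()
    return [s["line"] for s in data["parse"]["sections"]]

print(wikipedia_toc("Digital camera"))
# For Japanese Wikipedia, pass endpoint="https://ja.wikipedia.org/w/api.php".
```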


Database Systems for Advanced Applications | 2010

Outline of community-type content based on Wikipedia

Akiyo Nadamoto; Eiji Aramaki; Takeshi Abekawa; Yohei Murakami

It is difficult to understand the outline of community-type content such as blogs, social network services (SNSs), and bulletin board systems (BBSs) because multiple users post content freely. In this paper, we have developed a system that presents the outline of community-type content by using Wikipedia. We focus on the table of contents (TOC) collected from Wikipedia. Our system compares the comments in a thread with the information in the TOC obtained from Wikipedia and identifies contents that are similar. Thus, users can understand the outline of community-type content when they view a table of similar contents.


Archive | 2010

Extracting Neglected Content from Community-Type Content

Akiyo Nadamoto; Eiji Aramaki; Takeshi Abekawa; Yohei Murakami

In online community-type content, such as that in social network services (SNSs) and blogs, users occasionally do not consider the theme of the content from multiple viewpoints, and hence much of the information is missing. Because the discussion in a community is concentrated on particular topics, the users' viewpoint becomes narrow. We believe that it is necessary to provide the users of a community with information they are unaware of. The information that a user is unaware of is called a "content hole," and the search for such holes is called a "content hole search." In this paper, we define types of content-hole search on the basis of viewpoints. Our proposed viewpoints are coverage, detail, semantics, and reputation. Furthermore, as a first step toward developing a search technique for content holes, we attempt to extract neglected content from online community-type content on the basis of an "isolated degree" and a "non-related degree," and then present this neglected content. Neglected content consists of information that is of no interest to anybody within the community but may be of interest to many people outside it.
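
A rough sketch of the isolated-degree idea, under an assumed definition: a comment's isolation is taken here as one minus its highest lexical similarity to any other comment in the thread, so comments that nobody engages with score high. The paper's actual isolated and non-related degrees may be defined differently.

```python
# Illustrative "isolated degree": one minus the comment's highest Jaccard
# similarity to any other comment in the thread.
import re

def bag(text):
    return set(re.findall(r"[A-Za-z]+", text.lower()))

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def isolated_degree(comments, i):
    others = [jaccard(bag(comments[i]), bag(c)) for j, c in enumerate(comments) if j != i]
    return 1.0 - max(others, default=0.0)

def neglected(comments, threshold=0.8):
    """Comments whose isolated degree exceeds the threshold."""
    return [c for i, c in enumerate(comments) if isolated_degree(comments, i) > threshold]
```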

Collaboration


Dive into Takeshi Abekawa's collaborations.

Top Co-Authors

Manabu Okumura

Tokyo Institute of Technology


Eiichiro Sumita

National Institute of Information and Communications Technology


Masao Utiyama

National Institute of Information and Communications Technology


Akiko Aizawa

National Institute of Informatics


Bor Hodošček

Tokyo Institute of Technology


Hidetsugu Nanba

Hiroshima City University
