Network


Latest external collaborations at the country level. Click on the dots to dive into the details.

Hotspot


Dive into the research topics where Eiji Aramaki is active.

Publication


Featured research published by Eiji Aramaki.


Conference on Computer Supported Cooperative Work | 2012

Use trend analysis of Twitter after the Great East Japan Earthquake

Mai Miyabe; Asako Miura; Eiji Aramaki

After the Great East Japan Earthquake in 2011, numerous tweets were exchanged on Twitter. Several studies have already pointed out that micro-blogging systems show potential advantages in emergency situations, but it remains unclear how people use them. This paper presents a case study of how people used Twitter after the Great East Japan Earthquake. First, we gathered tweets posted immediately after the earthquake and analyzed various factors, including location. The results revealed two findings: (1) people in the disaster area tended to communicate directly with each other (reply-based tweets), whereas (2) people in other areas preferred to spread information from the disaster area by retweeting.
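The reply/retweet distinction the study counts can be sketched with the conventional "@user" and "RT @user" tweet prefixes. This is an illustrative sketch, not the paper's code; the function name and sample tweets are assumptions.

```python
# Illustrative sketch: bucket raw tweets into the reply / retweet / plain
# categories compared in the study, using conventional Twitter prefixes.
def tweet_type(text: str) -> str:
    """Return 'reply', 'retweet', or 'plain' for a raw tweet string."""
    stripped = text.lstrip()
    if stripped.startswith("RT @"):
        return "retweet"
    if stripped.startswith("@"):
        return "reply"
    return "plain"

tweets = [
    "@friend are you safe?",          # direct communication -> reply
    "RT @nhk_news: evacuation info",  # spreading information -> retweet
    "strong shaking in Sendai",
]
counts = {}
for t in tweets:
    kind = tweet_type(t)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)  # {'reply': 1, 'retweet': 1, 'plain': 1}
```

Comparing such counts between tweets geotagged inside and outside the disaster area reproduces the shape of the paper's analysis.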


North American Chapter of the Association for Computational Linguistics | 2009

TEXT2TABLE: Medical Text Summarization System Based on Named Entity Recognition and Modality Identification

Eiji Aramaki; Yasuhide Miura; Masatsugu Tonoike; Tomoko Ohkuma; Hiroshi Mashuichi; Kazuhiko Ohe

With the rapidly growing use of electronic health records, the possibility of large-scale clinical information extraction has drawn much attention. It is not, however, easy to extract information from these reports because they are written in natural language. To address this problem, this paper presents a system that converts a medical text into a table structure. This system's core technologies are (1) medical event recognition modules and (2) a negative event identification module that judges whether an event actually occurred or not. Regarding the latter module, this paper also proposes an SVM-based classifier using syntactic information. Experimental results demonstrate empirically that syntactic information can contribute to the method's accuracy.
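The text-to-table pipeline can be sketched in miniature: recognize event terms, decide whether each is negated, and emit table rows. The dictionaries and function name below are stand-in assumptions; the paper uses trained NER modules and an SVM with syntactic features rather than word lists.

```python
# Minimal sketch of the text-to-table idea (assumed names, not the authors'
# implementation): recognize medical event terms from a small dictionary and
# mark an event as not-occurred when a negation cue shares its sentence.
EVENT_TERMS = {"fever", "cough", "headache"}   # stand-in for the NER modules
NEGATION_CUES = {"no", "denies", "without"}    # stand-in for the SVM classifier

def text_to_table(report: str):
    rows = []
    for sentence in report.lower().split("."):
        words = sentence.split()
        negated = any(w in NEGATION_CUES for w in words)
        for w in words:
            if w in EVENT_TERMS:
                rows.append({"event": w, "occurred": not negated})
    return rows

table = text_to_table("Patient reports fever. Denies cough.")
# -> fever occurred, cough negated
```

The negation step is exactly where the paper's modality identification earns its keep: "denies cough" must land in the table as a non-event.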


Journal of diabetes science and technology | 2012

DialBetics: Smartphone-Based Self-Management for Type 2 Diabetes Patients

Kayo Waki; Hideo Fujita; Yuji Uchimura; Eiji Aramaki; Koji Omae; Takashi Kadowaki; Kazuhiko Ohe

Self-management is a key component of diabetes therapy.1 We developed DialBetics, a real-time, partially automated interactive system for diabetes self-management. It is the first system combining information technology and natural language processing (NLP) that performs real-time automated text communication with patients, increasing their convenience while minimizing health care providers' workload and cost.2 A pilot study was conducted to assess the safety, usability, and impact of remote health-data monitoring on patient hemoglobin A1c (HbA1c) outcomes, and the effect of home blood pressure monitoring as a way of managing the complications related to diabetes.3

DialBetics comprises three modules:

(1) Data transmission: patients' blood glucose, blood pressure, body weight, and pedometer counts are measured at home and sent to the server twice a day.

(2) Evaluation: data are automatically evaluated against the Japan Diabetes Society (JDS) guideline's targeted values—optimally, blood sugar 10,000. The module determines whether each reading satisfies guideline requirements, then sends the results to each patient's smartphone. Abnormal readings—defined as blood sugar >400 mg/dl or 220 mm Hg—are reported as "Dr. Call," meaning a physician will check the data and interact with the patient if necessary.

(3) Communication: (a) patients' voice/text messages about meals and exercise are sent to the server; (b) voice messages are converted to text and matched with text in the system's database; (c) advice on lifestyle modification, matched to the patient's input about diet and exercise, is returned to each patient (Figure 1).

Figure 1: An overview of DialBetics.
For (b) and (c), the NLP-based disambiguation system allows our system to choose database words with high agreement rates for the patients' input.4 Eleven patients who had been diagnosed with type 2 diabetes more than 5 years earlier were recruited from a university hospital for a 1-month pilot study (age 61.9 ± 8.7 years, body mass index 24.6 ± 4.5 kg/m2, and HbA1c 6.79 ± 0.58%). To be eligible, patients had to be free of any severe complications—serum creatinine below 1.5 mg/dl, no proliferative retinopathy—and had to be able to exercise. Eligible patients gave consent, and the study was approved by the Institutional Review Board. DialBetics was found to be accurate and safe in data transmission, evaluation, and text communication for patients whose HbA1c was around 7%. All subjects were satisfied with it and were enthusiastic about receiving real-time advice on lifestyle modification. Mean HbA1c decreased significantly, by 0.26%, after 1 month (HbA1c 6.53 ± 0.84%, p = .02). The message-processing success rate was 73.6%, which can be improved by expanding the database. Notably, the system showed eight patients that their mean blood pressures did not meet the JDS goal for hypertension therapy. Because no readings met the "Dr. Call" criteria, a physician's time was not required. The study suggested that not only blood glucose monitoring but also home blood pressure monitoring might improve diabetes care by revealing uncontrolled blood pressure, which is frequently overlooked in regular hospital visits. We plan to validate these findings in a larger randomized controlled trial with a longer duration (in progress).
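The server-side "Dr. Call" check is a simple threshold rule. Only the >400 mg/dl blood-sugar bound is stated explicitly in the abstract; the function name and sample readings below are illustrative assumptions.

```python
# Hedged sketch of the evaluation module's physician-escalation rule:
# flag a blood-glucose reading for review when it exceeds the stated bound.
DR_CALL_GLUCOSE_MG_DL = 400  # threshold given in the text above

def needs_dr_call(blood_sugar_mg_dl: float) -> bool:
    """True when a reading should be reported as 'Dr. Call'."""
    return blood_sugar_mg_dl > DR_CALL_GLUCOSE_MG_DL

readings = [120, 95, 410]          # illustrative twice-daily transmissions
flagged = [r for r in readings if needs_dr_call(r)]
print(flagged)  # [410]
```

In the pilot no reading crossed such a threshold, which is why no physician time was consumed.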


North American Chapter of the Association for Computational Linguistics | 2003

Word selection for EBMT based on monolingual similarity and translation confidence

Eiji Aramaki; Sadao Kurohashi; Hideki Kashioka; Hideki Tanaka

We propose a method of constructing an example-based machine translation (EBMT) system that exploits a content-aligned bilingual corpus. First, the sentences and phrases in the corpus are aligned across the two languages, and the pairs with high translation confidence are selected and stored in the translation memory. Then, for a given input sentence, the system searches for fitting examples based on both the monolingual similarity and the translation confidence of the pair, and the obtained results are combined to generate the translation. Our experiments on translation selection showed an accuracy of 85%, demonstrating the basic feasibility of our approach.
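The example-selection criterion can be sketched as a product of the two signals the abstract names. The Jaccard similarity and the multiplicative combination below are illustrative assumptions, not the paper's exact formula.

```python
# Sketch of EBMT example selection: retrieve the translation-memory pair
# that maximizes monolingual similarity to the input, weighted by the
# pair's translation confidence.
def similarity(a: str, b: str) -> float:
    """Word-overlap (Jaccard) similarity as a stand-in monolingual metric."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select_example(input_sentence, memory):
    """memory: list of (source, target, confidence) pairs."""
    return max(memory,
               key=lambda p: similarity(input_sentence, p[0]) * p[2])

memory = [
    ("he reads a book", "kare wa hon o yomu", 0.9),
    ("he reads a paper", "kare wa shinbun o yomu", 0.4),
]
best = select_example("she reads a book", memory)
# -> the high-confidence, high-similarity first pair
```

Weighting by confidence means a slightly less similar example can still win if its alignment was more trustworthy, which is the point of storing confidence in the translation memory.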


Proceedings of the 2009 Named Entities Workshop: Shared Task on Transliteration (NEWS 2009) | 2009

Fast Decoding and Easy Implementation: Transliteration as Sequential Labeling

Eiji Aramaki; Takeshi Abekawa

Although most previous transliteration methods are based on a generative model, this paper presents a discriminative transliteration model using conditional random fields. We regard character(s) as a kind of label, which enables us to treat the transliteration process as a sequential labeling process. This approach has two advantages: (1) fast decoding and (2) easy implementation. Experimental results yielded competitive performance, demonstrating the feasibility of the proposed approach.
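The reframing is: each source character is one tagging step whose "label" is the target character(s) it produces, so any sequential labeler (the paper uses CRFs) applies. The feature names and the toy alignment below are assumptions for illustration.

```python
# Sketch of transliteration as sequential labeling: build the per-position
# feature windows a CRF-style labeler would consume.
def char_features(chars, i):
    """Context-window features for source position i."""
    feats = {"cur": chars[i]}
    feats["prev"] = chars[i - 1] if i > 0 else "<s>"
    feats["next"] = chars[i + 1] if i < len(chars) - 1 else "</s>"
    return feats

# One training pair viewed as a labeling problem: source characters in,
# target character(s) out at each step (illustrative alignment only).
source = list("tokyo")
labels = ["ト", "", "キョ", "", "ウ"]
X = [char_features(source, i) for i in range(len(source))]
```

Because decoding is then just Viterbi over per-character labels, the model inherits the speed and simplicity the abstract claims.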


Meeting of the Association for Computational Linguistics | 2007

UTH: SVM-based Semantic Relation Classification using Physical Sizes

Eiji Aramaki; Takeshi Imai; Kengo Miyo; Kazuhiko Ohe

Although researchers have shown increasing interest in extracting and classifying semantic relations, most previous studies have relied on lexical patterns between terms. This paper proposes a novel way to accomplish the task: a system that captures the physical size of an entity. Experimental results revealed that our proposed method is feasible and avoids the problems inherent in other methods.
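The size signal can be sketched as features over a lookup of entity sizes; for a part-whole-like relation, the part should be smaller than the whole. The size table, numbers, and names below are rough illustrative assumptions, not the paper's resources.

```python
# Illustrative sketch: turn the physical sizes of two entities into
# features a relation classifier (the paper uses an SVM) could consume.
import math

SIZE_M = {"wheel": 0.6, "car": 4.5, "city": 10000.0}  # rough sizes in meters

def size_features(e1: str, e2: str):
    s1, s2 = SIZE_M[e1], SIZE_M[e2]
    return {
        "log_ratio": math.log(s1 / s2),  # negative when e1 is smaller
        "e1_smaller": s1 < s2,
    }

f = size_features("wheel", "car")
# a wheel is smaller than a car, consistent with a Part-Whole reading
```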


International Joint Conference on Natural Language Processing | 2015

Who caught a cold? Identifying the subject of a symptom

Shin Kanouchi; Mamoru Komachi; Naoaki Okazaki; Eiji Aramaki; Hiroshi Ishikawa

The development and proliferation of social media services have led to the emergence of new approaches for surveying the population and addressing social issues. One popular application of social media data is health surveillance, e.g., predicting the outbreak of an epidemic by recognizing diseases and symptoms in text messages posted on social media platforms. In this paper, we propose a novel task that is crucial and generic from the viewpoint of health surveillance: estimating the subject (carrier) of a disease or symptom mentioned in a Japanese tweet. By designing an annotation guideline for labeling the subject of a disease/symptom in a tweet, we perform annotations on an existing corpus for public surveillance. In addition, we present a supervised approach for predicting the subject of a disease/symptom. The results of our experiments demonstrate the impact of subject identification on the effective detection of an episode of a disease/symptom. Moreover, the results suggest that our task is independent of the type of disease/symptom.


International Journal of Web Information Systems | 2010

Extracting content holes by comparing community‐type content with Wikipedia

Akiyo Nadamoto; Eiji Aramaki; Takeshi Abekawa; Yohei Murakami

Purpose – Community-type content, such as social network services and blogs, is maintained by communities of people. Occasionally, community members do not understand the nature of the content from multiple perspectives, so the volume of information is often inadequate. The authors thus consider it necessary to present users with the missing information. The purpose of this paper is to search for the content "hole": information that users of community-type content have missed.

Design/methodology/approach – The proposed content hole is defined as the differing information obtained by comparing community-type content with other content, such as other community-type content, conventional web content, and real-world content. The paper suggests multiple types of content holes and proposes a system that compares community-type content with Wikipedia articles and identifies the content hole. The paper first identifies structured keywords from the community-type content, and extracts target articles from Wiki...
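At its core, the comparison step is a difference between keyword sets. The toy tokenizer and example texts below are assumptions; the paper works with structured keywords and full Wikipedia articles rather than word lists.

```python
# Minimal sketch of content-hole extraction: the "hole" is what the
# reference article (Wikipedia) covers that the community content does not.
def keywords(text: str) -> set:
    """Crude keyword extraction: longer words, lowercased, punctuation trimmed."""
    return {w.lower().strip(".,") for w in text.split() if len(w) > 3}

community = "Our blog discusses espresso brewing and grinders."
wikipedia = "Espresso brewing depends on grinders, pressure and temperature."

hole = keywords(wikipedia) - keywords(community)
# topics the community content is missing, e.g. pressure and temperature
```

Presenting `hole` back to the community surfaces the perspectives its members overlooked, which is the system's stated goal.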


Proceedings of the Third International Workshop on Natural Language Processing for Social Media | 2015

Location Name Disambiguation Exploiting Spatial Proximity and Temporal Consistency

Takashi Awamura; Daisuke Kawahara; Eiji Aramaki; Tomohide Shibata; Sadao Kurohashi

As the volume of documents on the Web increases, technologies to extract useful information from them become increasingly essential. For instance, information extracted from social network services such as Twitter and Facebook is useful because it contains a lot of location-specific information. To extract such information, it is necessary to identify the location of each location-relevant expression within a document. Previous studies on location disambiguation have tackled this problem on the basis of word sense disambiguation, and did not make use of location-specific clues. In this paper, we propose a method for location disambiguation that takes advantage of the following two clues: spatial proximity and temporal consistency. We confirm the effectiveness of these clues through experiments on Twitter tweets with GPS information.
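The spatial-proximity clue can be sketched with a great-circle distance: among candidate referents of an ambiguous place name, prefer the one closest to a location already resolved nearby in the document. The coordinates and candidate names below are illustrative assumptions, not the paper's data.

```python
# Sketch of the spatial-proximity clue for location disambiguation.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two candidate "Springfield"s; the document already resolved "Chicago".
candidates = {
    "Springfield, IL": (39.80, -89.65),
    "Springfield, MA": (42.10, -72.59),
}
anchor = (41.88, -87.63)  # Chicago
best = min(candidates, key=lambda c: haversine_km(*anchor, *candidates[c]))
# -> the Illinois candidate, the one nearer the anchor
```

The temporal-consistency clue works analogously over time stamps: mentions close in time are unlikely to be far apart in space.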


Meeting of the Association for Computational Linguistics | 2015

Disease Event Detection based on Deep Modality Analysis

Yoshiaki Kitagawa; Mamoru Komachi; Eiji Aramaki; Naoaki Okazaki; Hiroshi Ishikawa

Social media has attracted attention because of its potential for extraction of information of various types. For example, information collected from Twitter enables us to build useful applications such as predicting an epidemic of influenza. However, using text information from social media poses challenges for event detection because of the unreliable nature of user-generated texts, which often include counter-factual statements. Consequently, this study proposes the use of modality features to improve disease event detection from Twitter messages, or “tweets”. Experimental results demonstrate that the combination of a modality dictionary and a modality analyzer improves the F1-score by 3.5 points.
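The modality idea can be sketched as a dictionary of counter-factual cues that vetoes an otherwise positive disease mention. The cue and disease lists and the function name below are toy assumptions; the paper combines a modality dictionary with a full modality analyzer.

```python
# Sketch of modality-aware event detection: a disease mention counts as an
# event only when no counter-factual/speculative cue accompanies it.
MODALITY_CUES = {"if", "hope", "maybe", "wish", "unless"}  # toy dictionary
DISEASES = {"influenza", "cold"}

def detect_event(tweet: str) -> bool:
    """True only for a factual disease mention."""
    words = [w.strip("?.!,") for w in tweet.lower().split()]
    has_disease = any(w in DISEASES for w in words)
    counter_factual = any(w in MODALITY_CUES for w in words)
    return has_disease and not counter_factual

print(detect_event("I caught influenza"))        # True: factual mention
print(detect_event("I hope I avoid influenza"))  # False: counter-factual
```

Suppressing the second kind of tweet is exactly the noise reduction behind the reported 3.5-point F1 gain.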

Collaboration


Dive into Eiji Aramaki's collaborations.

Top Co-Authors

Takeshi Abekawa

National Institute of Informatics
