
Publication


Featured research published by Chihiro Ono.


Knowledge-Based Systems | 2013

Twitter user profiling based on text and community mining for market analysis

Kazushi Ikeda; Gen Hattori; Chihiro Ono; Hideki Asoh; Teruo Higashino

This paper proposes demographic estimation algorithms for profiling Twitter users based on their tweets and community relationships. Many people post their opinions via social media services such as Twitter. This huge volume of opinions, expressed in real time, has great appeal as a novel marketing application. When automatically extracting these opinions, it is desirable to be able to discriminate between them based on user demographics, because the ratio of positive to negative opinions differs depending on demographics such as age, gender, and residence area, all of which are essential for market analysis. In this paper, we propose a hybrid text-based and community-based method for the demographic estimation of Twitter users, where these demographics are estimated by analyzing tweet histories and clustering followers/followees. Our experimental results from 100,000 Twitter users show that the proposed hybrid method improves on the accuracy of the text-based method. The proposed method is applicable to various user demographics and is suitable even for users who tweet only infrequently.
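
A minimal sketch of what a hybrid text-plus-community demographic estimator could look like. The toy data, the community feature (label distribution of already-classified followees as a stand-in for follower/followee clustering), and the mixing weight `alpha` are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical hybrid text + community demographic classifier (sketch only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
import numpy as np

# Toy data: concatenated tweet history per user and a demographic label.
tweets = ["love this new cafe near campus", "quarterly sales report is due",
          "exam week again", "board meeting ran late tonight"]
labels = ["student", "employee", "student", "employee"]

# Text-based estimator: bag-of-words over the tweet history.
text_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
text_clf.fit(tweets, labels)

def community_scores(followee_labels, classes):
    # Community-based evidence: label distribution among followees.
    counts = np.array([followee_labels.count(c) for c in classes], dtype=float)
    return counts / counts.sum() if counts.sum() else np.full(len(classes), 1 / len(classes))

def hybrid_predict(tweet_text, followee_labels, alpha=0.6):
    """Blend text and community evidence; alpha is an assumed mixing weight."""
    classes = list(text_clf.classes_)
    p_text = text_clf.predict_proba([tweet_text])[0]
    p_comm = community_scores(followee_labels, classes)
    p = alpha * p_text + (1 - alpha) * p_comm
    return classes[int(np.argmax(p))]

print(hybrid_predict("midterms are brutal", ["student", "student", "employee"]))
```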


international conference on user modeling, adaptation, and personalization | 2009

Context-Aware Preference Model Based on a Study of Difference between Real and Supposed Situation Data

Chihiro Ono; Yasuhiro Takishima; Yoichi Motomura; Hideki Asoh

We propose a novel approach for constructing statistical preference models for context-aware recommender systems. One of the most important but difficult problems in doing so is acquiring sufficient training data in various contexts/situations. In particular, some situations require a heavy workload to set up or to gather subjects for. To avoid this, a large amount of data is often collected in a supposed situation, i.e., a situation where the subject pretends or imagines that he/she is in a specific situation. Although the preference expressed in the real situation may differ from that expressed in the supposed situation, this difference has not been considered in existing research. To study the difference, we collected a set of corresponding data by asking subjects the same preference question in both the real and the supposed situation. We then propose a new model construction method using a difference model built from the correspondence data, and we show its effectiveness through experiments.
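
The following is a hedged sketch of the "difference model" idea described above: fit a base preference model on abundant supposed-situation ratings, then learn a correction from the small paired set rated in both situations. The feature layout, synthetic data, and choice of ridge regression are assumptions for illustration only.

```python
# Sketch: supposed-situation model + learned real-vs-supposed difference model.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_supposed = rng.normal(size=(200, 5))          # item/context features (supposed situation)
y_supposed = X_supposed @ [1.0, 0.5, 0, -0.5, 0.2] + rng.normal(0, 0.1, 200)

# Paired data: the same items also rated in the real situation.
X_paired = X_supposed[:30]
y_paired_real = y_supposed[:30] + 0.8 * X_paired[:, 0] + 0.3   # systematic gap

base = Ridge().fit(X_supposed, y_supposed)                           # supposed-situation model
diff = Ridge().fit(X_paired, y_paired_real - base.predict(X_paired)) # difference model

def predict_real(x):
    """Predicted real-situation preference = supposed model + learned correction."""
    x = np.atleast_2d(x)
    return base.predict(x) + diff.predict(x)

print(predict_real(X_supposed[100]))
```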


symposium on applications and the internet | 2004

Implementation and evaluation of message delegation middleware for ITS application

Gen Hattori; Chihiro Ono; Satoshi Nishiyama; Hiroki Horiuchi

Many applications in intelligent transportation systems (ITS) rely on communication between vehicles and systems on the fixed network. Although DSRC, wireless LAN, cellular phones, PHS, etc. can all be used as media for vehicle-road communication, their characteristics, such as coverage area, transmission speed, and communication cost, differ. Applications using vehicle-road communication therefore need a function that selects one of these communication media according to given criteria. Moreover, the applications also need a reliable communication function that sends a message only when a communication channel is available, since channels are assumed to be interrupted frequently owing to the spot-based communication environment of DSRC and wireless LAN. To improve the development efficiency of applications in the DSRC network, we have previously proposed middleware with a message delegation mechanism that realizes reliable message delivery based on information about the status of the DSRC network. We describe the implementation of the middleware and present its evaluation results.
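
A small sketch of the two middleware functions described above: selecting a medium by its characteristics and delegating (queueing) messages until a channel is actually available. The media parameters, selection rule, and class names are illustrative assumptions, not the middleware's actual design.

```python
# Hypothetical media selection and store-and-forward message delegation.
from collections import deque
from dataclasses import dataclass

@dataclass
class Medium:
    name: str
    coverage_m: int      # usable range from a roadside unit
    speed_kbps: int
    cost_per_mb: float
    available: bool = False

MEDIA = [Medium("DSRC", 30, 4000, 0.0), Medium("WLAN", 100, 11000, 0.0),
         Medium("Cellular", 10_000, 384, 1.5)]

def select_medium(min_speed_kbps, max_cost):
    """Pick the cheapest available medium meeting the speed requirement."""
    candidates = [m for m in MEDIA
                  if m.available and m.speed_kbps >= min_speed_kbps and m.cost_per_mb <= max_cost]
    return min(candidates, key=lambda m: m.cost_per_mb) if candidates else None

class MessageDelegator:
    """Queue messages and deliver them only when some channel is up."""
    def __init__(self):
        self.queue = deque()
    def send(self, msg, min_speed=100, max_cost=2.0):
        self.queue.append((msg, min_speed, max_cost))
        self.flush()
    def flush(self):
        while self.queue:
            msg, speed, cost = self.queue[0]
            medium = select_medium(speed, cost)
            if medium is None:          # intermittent link: keep the message queued
                return
            print(f"delivering {msg!r} over {medium.name}")
            self.queue.popleft()

d = MessageDelegator()
d.send("probe data")                    # stays queued: nothing available yet
MEDIA[0].available = True               # vehicle enters a DSRC spot
d.flush()
```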


advanced information networking and applications | 2012

Social Indexing of TV Programs: Detection and Labeling of Significant TV Scenes by Twitter Analysis

Masami Nakazawa; Maike Erdmann; Keiichiro Hoashi; Chihiro Ono

Technology to analyze the content of TV programs, especially the extraction and annotation of important scenes and events within a program, helps users enjoy recorded programs. In this paper, we propose a method of detecting significant scenes in TV programs and automatically annotating the content of the extracted scenes through Twitter analysis. Experiments conducted on baseball games indicate that the proposed method is capable of detecting major events in a baseball game with an accuracy of 90.6%. Moreover, the names of persons involved in the events were detected with an accuracy of 87.2%, and labels describing the events were applied with an accuracy of 66.8%. The proposed technology is very helpful because it enables users to skip to the highlights of a recorded program.
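
One simple way to detect "significant scenes" from Twitter activity is to flag intervals where the tweet rate spikes well above its recent average. The sketch below assumes a sliding-window mean/standard-deviation rule with parameters chosen for illustration; it is not the paper's tuned detector and omits the labeling step.

```python
# Illustrative tweet-rate burst detector for significant scene detection.
import numpy as np

def detect_scenes(tweets_per_minute, window=10, k=3.0):
    """Return minute indices whose tweet count exceeds mean + k*std
    of the preceding `window` minutes."""
    counts = np.asarray(tweets_per_minute, dtype=float)
    scenes = []
    for t in range(window, len(counts)):
        past = counts[t - window:t]
        if counts[t] > past.mean() + k * past.std() + 1e-9:
            scenes.append(t)
    return scenes

# Toy series: a home run around minute 14 triggers a spike in tweets.
series = [12, 10, 11, 13, 12, 11, 10, 12, 11, 13, 12, 11, 12, 10, 95, 60, 20]
print(detect_scenes(series))   # -> [14]
```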


advanced information networking and applications | 2013

Early Detection Method of Service Quality Reduction Based on Linguistic and Time Series Analysis of Twitter

Kazushi Ikeda; Gen Hattori; Chihiro Ono; Hideki Asoh; Teruo Higashino

This paper proposes a method for detecting service quality reduction at an early stage based on a linguistic and time series analysis of Twitter. Recently, many people have posted their opinions about products and service quality via social networking services such as Twitter. The number of tweets related to service quality increases when service quality reductions, such as communication failures and train delays, occur. It is crucial for service operators to recover service quality at an early stage in order to maintain customer satisfaction, and tweets can serve as an important clue for detecting service quality reduction. In this paper, we propose a method for early detection of service quality reduction that makes full use of the Twitter platform, which provides tweets as text information and supports real-time communication. The proposed method consists of a linguistic analysis and a time series analysis of tweets. In the linguistic analysis, a semi-automatic method is proposed for constructing a service-specific dictionary, which is used to extract negative tweets related to the services with high accuracy. In the time series analysis, statistical modeling is used for early and accurate anomaly detection from the time series of negative tweets. The experimental results show that both the extraction accuracy of negative tweets and the detection accuracy of service quality reduction are significantly improved.
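
A minimal sketch of the two stages described above: filter tweets with a service-specific negative-term dictionary, then flag an interval as a likely quality reduction when the negative-tweet count is improbably high under a simple statistical baseline. The dictionary, the Poisson baseline, and the threshold are assumptions for illustration, not the paper's model.

```python
# Sketch: dictionary-based negative tweet extraction + simple anomaly detection.
import math

NEGATIVE_TERMS = {"outage", "down", "delayed", "no signal", "cannot connect"}

def is_negative(tweet):
    text = tweet.lower()
    return any(term in text for term in NEGATIVE_TERMS)

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))

def detect_quality_reduction(tweet_batches, baseline_rate=2.0, alpha=0.01):
    """tweet_batches: list of tweet lists, one per time interval."""
    alerts = []
    for t, batch in enumerate(tweet_batches):
        negatives = sum(is_negative(tw) for tw in batch)
        if poisson_sf(negatives, baseline_rate) < alpha:
            alerts.append((t, negatives))
    return alerts

batches = [["nice weather", "train on time"],
           ["train delayed again", "still delayed", "no signal on the platform",
            "service is down", "completely down", "delayed 40 minutes", "outage?"]]
print(detect_quality_reduction(batches))   # -> [(1, 7)]
```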


international conference on user modeling, adaptation, and personalization | 2011

An acceptance model of recommender systems based on a large-scale internet survey

Hideki Asoh; Chihiro Ono; Yukiko Habu; Haruo Takasaki; Takeshi Takenaka; Yoichi Motomura

Recommendation services capture and exploit personal information such as demographic attributes, preferences, and user behavior on the internet. Some users feel uneasy about such information acquisition by systems and are concerned about their online privacy. Investigating the structure of this uneasiness and evaluating its effect on user acceptance of recommender systems is an important step toward developing services that users accept. In this study, we developed an acceptance model of recommender systems based on a large-scale internet survey using 60 kinds of pseudo-services.


IEICE Transactions on Information and Systems | 2008

Context-Aware Users' Preference Models by Integrating Real and Supposed Situation Data

Chihiro Ono; Yasuhiro Takishima; Yoichi Motomura; Hideki Asoh; Yasuhide Shinagawa; Michita Imai; Yuichiro Anzai

This paper proposes a novel approach for constructing statistical preference models for context-aware personalized applications such as recommender systems. In constructing context-aware statistical preference models, one of the most important but difficult problems is acquiring a large amount of training data in various contexts/situations. In particular, some situations require a heavy workload to set up or to gather subjects capable of answering inquiries under those situations. Because of this difficulty, it is common either to collect a small amount of data in a real situation, or to collect a large amount of data in a supposed situation, i.e., a situation in which the subject pretends to be in the specific situation when answering inquiries. However, both approaches have problems. With the former approach, the performance of the constructed preference model is likely to be poor because the amount of data is small. With the latter approach, the data acquired in the supposed situation may differ from that acquired in the real situation. Nevertheless, this difference has not been taken seriously in existing research. In this paper we propose methods of obtaining a better preference model by integrating a small amount of real-situation data with a large amount of supposed-situation data. The methods are evaluated using data on food preferences. The experimental results show that the precision of the preference model can be improved significantly.
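
One simple way to integrate a small real-situation sample with a large supposed-situation sample is to pool them and give the real data a larger sample weight when fitting the model. The synthetic data, the ridge regressor, and the weighting scheme below are assumptions for illustration; the paper evaluates its own integration methods on food-preference data.

```python
# Sketch: weighted pooling of real- and supposed-situation preference data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X_sup = rng.normal(size=(500, 4));  y_sup = X_sup @ [1, 0, -1, 0.5] + rng.normal(0, 0.2, 500)
X_real = rng.normal(size=(25, 4));  y_real = X_real @ [1.3, 0.2, -1, 0.5] + rng.normal(0, 0.2, 25)

X = np.vstack([X_sup, X_real])
y = np.concatenate([y_sup, y_real])
w = np.concatenate([np.ones(len(y_sup)), 10.0 * np.ones(len(y_real))])  # up-weight real data

model = Ridge().fit(X, y, sample_weight=w)
print(model.coef_)   # pulled toward the real-situation relationship
```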


pervasive computing and communications | 2003

Making Java-enabled mobile phone as ubiquitous terminal by lightweight FIPA compliant agent platform

Gen Hattori; Satoshi Nishiyama; Chihiro Ono; Hiroki Horiuchi

We discuss the design issues of a lightweight, FIPA-compliant agent platform for Java-enabled mobile phones and describe the design of such a platform. The platform turns Java-enabled mobile phones into ubiquitous terminals by providing a place for agent applications to run. Combined with location services, it can be used for various ubiquitous services. We also show a performance comparison of the prototype with LEAP, another lightweight agent platform.


international conference on data mining | 2016

Sequence-to-Sequence Model with Attention for Time Series Classification

Yujin Tang; Jianfeng Xu; Kazunori Matsumoto; Chihiro Ono

Encouraged by recent waves of successful applications of deep learning, some researchers have demonstrated the effectiveness of applying convolutional neural networks (CNNs) to time series classification problems. However, CNNs and other traditional methods require the input data to be of the same dimension, which prevents their direct application to data of various lengths and to multi-channel time series with different sampling rates across channels. Long short-term memory (LSTM), another tool in the deep learning arsenal, is by design better suited to problems involving time series, such as speech recognition and language translation. In this paper, we propose a novel model built around a sequence-to-sequence architecture that consists of two LSTMs, one encoder and one decoder. The encoder LSTM accepts input time series of arbitrary length and extracts information from the raw data, based on which the decoder LSTM constructs fixed-length sequences that can be regarded as discriminatory features. For better utilization of the raw data, we also introduce an attention mechanism into our model so that the feature generation process can peek at the raw data and focus its attention on the part of the raw data most relevant to the feature under construction. We call our model S2SwA, short for Sequence-to-Sequence with Attention. We test S2SwA on both uni-channel and multi-channel time series datasets and show that our model is competitive with the state of the art on real-world tasks such as human activity recognition.
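
A hedged PyTorch sketch of the general idea: an encoder LSTM reads a variable-length series, a decoder LSTM emits a fixed number of feature vectors while attending over the encoder outputs, and a linear head classifies the pooled features. The layer sizes, dot-product attention form, and pooling are assumptions, not the published S2SwA architecture or training procedure.

```python
# Sketch of a sequence-to-sequence encoder/decoder with attention for
# variable-length time series classification.
import torch
import torch.nn as nn

class S2SwASketch(nn.Module):
    def __init__(self, in_dim, hid=64, n_feats=8, n_classes=6):
        super().__init__()
        self.encoder = nn.LSTM(in_dim, hid, batch_first=True)
        self.decoder_cell = nn.LSTMCell(hid, hid)
        self.n_feats = n_feats
        self.head = nn.Linear(hid, n_classes)

    def forward(self, x):                      # x: (batch, time, in_dim), any time length
        enc_out, (h, c) = self.encoder(x)      # enc_out: (batch, time, hid)
        h, c = h[0], c[0]
        feats = []
        for _ in range(self.n_feats):          # build a fixed-length feature sequence
            scores = torch.bmm(enc_out, h.unsqueeze(2)).squeeze(2)    # dot-product attention
            attn = torch.softmax(scores, dim=1)
            context = torch.bmm(attn.unsqueeze(1), enc_out).squeeze(1)
            h, c = self.decoder_cell(context, (h, c))
            feats.append(h)
        pooled = torch.stack(feats, dim=1).mean(dim=1)
        return self.head(pooled)

# Batches of different lengths (e.g., different-duration activity recordings).
model = S2SwASketch(in_dim=3)
print(model(torch.randn(4, 120, 3)).shape)    # -> torch.Size([4, 6])
print(model(torch.randn(4, 57, 3)).shape)     # -> torch.Size([4, 6])
```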


pacific rim international conference on artificial intelligence | 2012

Hierarchical training of multiple SVMs for personalized web filtering

Maike Erdmann; Duc-Dung Nguyen; Tomoya Takeyoshi; Gen Hattori; Kazunori Matsumoto; Chihiro Ono

The abundance of information published on the Internet makes filtering hazardous Web pages a difficult yet important task. Supervised learning methods such as Support Vector Machines can be used to identify hazardous Web content. However, scalability is a major challenge, especially when multiple classifiers have to be trained, since different policies exist on what kind of information is hazardous. We therefore propose a transfer learning approach called Hierarchical Training for Multiple SVMs (HTMSVM). HTMSVM identifies the data shared among similar training sets and trains on the common data first in order to obtain initial solutions. These initial solutions then reduce the time needed to train on the individual training sets without affecting classification accuracy. In an experiment in which we trained five Web content filters on training sets with 80% common and 20% inconsistently labeled examples, HTMSVM predicted hazardous Web pages with only 26% to 41% of the training time of LibSVM while achieving the same classification accuracy (more than 91%).
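
The sketch below illustrates the intuition of reusing work on the common data, not the authors' exact HTMSVM algorithm: fit one SVM on the examples shared by all filtering policies, then train each policy's SVM on the shared model's support vectors plus that policy's own examples, so each per-policy training set is much smaller. The synthetic data and policy definitions are assumptions.

```python
# Illustrative hierarchical training of multiple per-policy SVMs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_common = rng.normal(size=(400, 10))
y_common = (X_common[:, 0] + X_common[:, 1] > 0).astype(int)   # shared labeling

# Policy-specific examples, labeled slightly differently per policy.
policies = {}
for name, bias in [("strict", -0.3), ("lenient", 0.3)]:
    Xp = rng.normal(size=(100, 10))
    yp = (Xp[:, 0] + Xp[:, 1] > bias).astype(int)
    policies[name] = (Xp, yp)

shared = SVC(kernel="linear").fit(X_common, y_common)
sv_idx = shared.support_                                       # indices of support vectors

models = {}
for name, (Xp, yp) in policies.items():
    # Reuse only the informative common examples (the support vectors).
    X_train = np.vstack([X_common[sv_idx], Xp])
    y_train = np.concatenate([y_common[sv_idx], yp])
    models[name] = SVC(kernel="linear").fit(X_train, y_train)
    print(name, "training set size:", len(y_train), "vs full:", len(y_common) + len(yp))
```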

Collaboration


Dive into Chihiro Ono's collaboration.

Top Co-Authors

Hideki Asoh

National Institute of Advanced Industrial Science and Technology
