Publication


Featured research published by Satoshi Oyama.


International Quantum Electronics Conference | 2013

Experimental demonstration of adaptive quantum state estimation

Ryo Okamoto; Minako Iefuji; Satoshi Oyama; Koichi Yamagata; Hiroshi Imai; Akio Fujiwara; Shigeki Takeuchi

Summary form only given. Quantum theory is inherently statistical. This entails repeating experiments over a number of identically prepared quantum objects, for example quantum states, if one wants to know the “true state” or the “true value” of the parameter that specifies the quantum state. Such an estimation procedure is particularly important for quantum communication and quantum computation, and is also indispensable to quantum metrology [1,2]. In applications, one needs to design the estimation procedure so that the estimated value of the parameter is close to the true value (consistency) and the uncertainty of the estimated value is as small as possible (efficiency) for a given limited number of samples. To realize these requirements, Nagaoka advocated an adaptive quantum state estimation (AQSE) procedure [3], and recently Fujiwara proved the strong consistency and asymptotic efficiency of AQSE [4]. In this paper, we report the first experimental demonstration of AQSE using photons [5]. The angle of a half-wave plate (HWP) that initializes the linear polarization of input photons is estimated using AQSE (Fig. 1). A sequence of AQSE is carried out with 300 input photons, and the sequence is repeated 500 times for four different settings of the HWP. The statistical analysis of these results verifies the strong consistency and asymptotic efficiency of AQSE. Recently, it has been mathematically proven that the precision of AQSE outperforms that of conventional state tomography [6]. It is thus expected that AQSE will provide a useful methodology in the broad area of quantum information processing, communication, and metrology. This work was supported in part by JSPS Quantum Cybernetics, JSPS-Kakenhi, JST-CREST, the FIRST Program, the Special Coordination Funds for Promoting Science and Technology, and the Research Foundation for Opto-Science and Technology.
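
As a rough illustration of the adaptive loop behind AQSE (measure one photon, re-estimate, re-optimize the next measurement setting), here is a minimal numerical sketch. The Malus-law detection model, the grid-based Bayesian posterior, and the 45-degree offset rule are illustrative stand-ins, not the estimator actually used in the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta = 0.3                           # unknown wave-plate angle (radians)
grid = np.linspace(0.0, np.pi / 2, 1000)   # candidate angles
log_post = np.zeros_like(grid)             # flat prior, log scale

analyzer = 0.0                             # initial measurement setting
for _ in range(300):                       # 300 input photons, as in the paper
    # Simulate one photon with a simple Malus-law detection model.
    click = rng.random() < np.cos(true_theta - analyzer) ** 2

    # Bayesian update of the posterior over the candidate angles.
    p = np.clip(np.cos(grid - analyzer) ** 2, 1e-12, 1 - 1e-12)
    log_post += np.log(p if click else 1 - p)
    log_post -= log_post.max()             # keep the log posterior bounded

    # Adapt: re-estimate the angle, then offset the analyzer 45 degrees so
    # the next photon is measured mid-fringe, away from the flat extremes
    # of the cos^2 curve.
    theta_hat = grid[np.argmax(log_post)]
    analyzer = theta_hat + np.pi / 4

print(f"true angle: {true_theta:.4f}  estimate: {theta_hat:.4f}")
```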


Expert Systems With Applications | 2014

Separate or joint? Estimation of multiple labels from crowdsourced annotations

Lei Duan; Satoshi Oyama; Haruhiko Sato; Masahito Kurihara

Artificial intelligence techniques aimed at more naturally simulating human comprehension fit the paradigm of multi-label classification. Generally, an enormous amount of high-quality multi-label data is needed to train a multi-label classifier. The creation of such datasets is usually expensive and time-consuming. A lower-cost way to obtain multi-label datasets for use with such comprehension-simulation techniques is to use noisy crowdsourced annotations. We propose incorporating label dependency into the label-generation process to estimate the multiple true labels for each instance given crowdsourced multi-label annotations. Three statistical quality control models based on the work of Dawid and Skene are proposed. The label-dependent DS (D-DS) model simply incorporates dependency relationships among all labels. The label pairwise DS (P-DS) model groups labels into pairs to prevent interference from uncorrelated labels. The Bayesian network label-dependent DS (ND-DS) model compactly represents label dependency using conditional independence properties to overcome the data sparsity problem. Results of two experiments, “affect annotation for lines in a story” and “intention annotation for tweets”, show that (1) the ND-DS model most effectively handles the multi-label estimation problem with annotations provided by only about five workers per instance and that (2) the P-DS model is best if there are pairwise comparison relationships among the labels. In summary, flexibly using label dependency to obtain multi-label datasets is a promising way to reduce the cost of data collection for future applications with minimal degradation in the quality of the results.
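
For context, the base Dawid and Skene aggregation that the three models extend can be sketched compactly. The toy implementation below assumes binary labels and a dense annotation matrix, and does not model the label dependencies that D-DS, P-DS, and ND-DS add:

```python
import numpy as np

def dawid_skene(ann, n_iter=50):
    """ann: (n_items, n_workers) array of 0/1 crowd labels."""
    # Initialize the posterior of each item's true label by majority vote.
    post = ann.mean(axis=1)                  # P(true label = 1) per item
    for _ in range(n_iter):
        # M-step: class prior plus per-worker confusion rates.
        prior = post.mean()
        # sensitivity: P(worker says 1 | true 1); specificity likewise.
        sens = (post[:, None] * ann).sum(0) / post.sum()
        spec = ((1 - post)[:, None] * (1 - ann)).sum(0) / (1 - post).sum()
        # E-step: recompute the posterior of each item's true label.
        like1 = prior * np.prod(np.where(ann == 1, sens, 1 - sens), axis=1)
        like0 = (1 - prior) * np.prod(np.where(ann == 0, spec, 1 - spec), axis=1)
        post = like1 / (like1 + like0)
    return post

# Three workers label four items; the third worker is noisy.
ann = np.array([[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 0, 0]])
print(np.round(dawid_skene(ann), 3))   # posterior P(label = 1) per item
```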


Expert Systems With Applications | 2016

Fine-tuning deep convolutional neural networks for distinguishing illustrations from photographs

Gota Gando; Taiga Yamada; Haruhiko Sato; Satoshi Oyama; Masahito Kurihara

Highlights: automatic detection of illustrations is needed for the target system; deep convolutional neural networks (DCNNs) have been successful in computer vision tasks; a DCNN with fine-tuning outperformed the other models, including those using handcrafted features. Systems for aggregating illustrations require a function for automatically distinguishing illustrations from photographs as they crawl the network to collect images. A previous attempt to implement this functionality by designing basic features deemed useful for classification achieved an accuracy of only about 58%. Deep neural networks, on the other hand, have been successful in computer vision tasks, and convolutional neural networks (CNNs) have performed well at automatically extracting useful image features. We evaluated alternative methods for implementing this classification functionality, with a focus on deep neural networks. In our experiments, the method that fine-tuned a deep convolutional neural network (DCNN) achieved 96.8% accuracy, outperforming the other models, including custom CNN models trained from scratch. We conclude that a DCNN with fine-tuning is the best method for implementing a function that automatically distinguishes illustrations from photographs.
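
A minimal PyTorch sketch of the fine-tuning recipe described above: start from an ImageNet-pretrained DCNN and retrain a new two-class head for illustration-vs-photograph classification. The choice of ResNet-50, the frozen feature layers, and the hyperparameters are assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():                     # freeze pretrained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new 2-class head

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) float tensor; labels: (B,) long tensor."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random data standing in for a real image loader.
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, 2, (4,))))
```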


International World Wide Web Conference | 2015

Assessment of Tweet Credibility with LDA Features

Jun Ito; Jing Song; Hiroyuki Toda; Yoshimasa Koike; Satoshi Oyama

With the rapid development of Social Networking Services (SNS) such as Twitter, which enable users to exchange short messages online, people can get information not only from traditional news media but also from the masses of SNS users. However, SNS users sometimes propagate spurious or misleading information, so an effective way to automatically assess the credibility of information is required. In this paper, we propose methods for assessing information credibility on Twitter that utilize tweet topic and user topic features derived from the Latent Dirichlet Allocation (LDA) model. We collected two thousand tweets, each labeled by seven annotators, and designed effective features for our classifier on the basis of data analysis results. An experiment we conducted showed a 3% improvement in Area Under the Curve (AUC) scores compared with existing methods, leading us to conclude that using topical features is an effective way to assess tweet credibility.
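
The feature pipeline can be sketched roughly as follows: infer per-tweet topic distributions with LDA and feed them to a credibility classifier. The corpus, labels, and hyperparameters below are toy stand-ins, and the paper's user-topic and other designed features are omitted:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

tweets = ["breaking earthquake hits the city tonight",
          "free iphone click this link now to win",
          "official agency confirms magnitude and damage reports",
          "you won a prize send your password to claim"]
credible = [1, 0, 1, 0]   # toy credibility labels

# Per-tweet topic mixtures serve as features for the classifier.
counts = CountVectorizer().fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)

clf = LogisticRegression().fit(topic_features, credible)
print(clf.predict_proba(topic_features)[:, 1])   # credibility scores
```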


Expert Systems With Applications | 2013

Affect analysis in context of characters in narratives

Michal Ptaszynski; Hiroaki Dokoshi; Satoshi Oyama; Rafal Rzepka; Masahito Kurihara; Kenji Araki; Yoshio Momouchi

This paper presents our research on text-based affect analysis (AA) of narratives. AA is the task of estimating or recognizing emotions elicited through a certain semiotic modality; in text-based AA, the modality in focus is the textual representation of language. In this research we study one particular type of language realization, namely narratives (e.g., stories, fairy tales, etc.). Affect analysis in the context of narratives is a challenging task because narratives are composed of different kinds of sentences (descriptions, dialogs, etc.). Moreover, different characters become the subjects of different emotional expressions in different parts of a narrative. In this research we address the problem of person/character-related affect recognition in narratives. We propose a method for extracting the emotion subject from a sentence based on the analysis of anaphoric expressions, and we compare two methods for affect analysis. We evaluate the system and discuss possible future improvements.
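
As a toy illustration of the anaphora-based emotion-subject step, the sketch below attributes an emotional expression whose subject is a pronoun to the most recently mentioned character of matching gender. Real systems rely on proper parsing and coreference resolution; every name and word list here is an illustrative assumption:

```python
CHAR_GENDER = {"Alice": "f", "Bob": "m"}      # known story characters
PRONOUN_GENDER = {"she": "f", "he": "m"}
EMOTION_WORDS = ("happy", "afraid", "angry")  # toy emotion lexicon

def emotion_subjects(sentences):
    """Yield (subject, sentence) pairs for sentences expressing emotion."""
    last = {}                                  # most recent character per gender
    for sent in sentences:
        tokens = sent.split()
        for tok in tokens:
            if tok in CHAR_GENDER:
                last[CHAR_GENDER[tok]] = tok   # track character mentions
        subject = tokens[0]                    # naive subject: first token
        gender = PRONOUN_GENDER.get(subject.lower())
        if gender and gender in last:
            subject = last[gender]             # resolve the anaphor by recency
        if any(w in sent for w in EMOTION_WORDS):
            yield subject, sent

story = ["Alice met Bob in the forest .",
         "She was afraid of the dark trees .",
         "Bob laughed and said he was happy ."]
for subject, sent in emotion_subjects(story):
    print(subject, "->", sent)
```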


Systems, Man and Cybernetics | 2014

Transfer learning based on the observation probability of each attribute

Masahiro Suzuki; Haruhiko Sato; Satoshi Oyama; Masahito Kurihara

Machine learning is the basis of important advances in artificial intelligence. Unlike general machine learning methods, which use the same task for training and testing, transfer learning uses different tasks to learn a new task. Among the various transfer learning algorithms in the literature, we focus on attribute-based transfer learning. This approach realizes transfer learning by introducing attributes and transferring the results of training to another task with common attributes. However, the existing method does not consider the frequency with which each attribute appears in feature vectors (called the observation probability). In this paper, we present a generative model that incorporates the observation probability. Our experiments show that the proposed method achieves a higher accuracy rate than the existing method. Moreover, it enables incremental learning, which was impossible with the existing method.
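
A schematic sketch of attribute-based transfer with an observation-probability weight: per-attribute classifiers trained on source classes score an unseen class through its attribute signature, with each attribute's contribution weighted by how often it is observable. This weighting is a simplified stand-in for the paper's generative model, and all data are synthetic:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Classes described by 3 binary attributes; "zebra" is never seen in
# training and must be recognized through its attribute signature alone.
signatures = {"horse": [1, 0, 1], "tiger": [0, 1, 0], "zebra": [1, 1, 1]}
obs_prob = np.array([0.9, 0.8, 0.5])   # how often each attribute is observable

def sample(cls, n=200):                # noisy features around the signature
    return np.array(signatures[cls], float) + rng.normal(0, 0.3, (n, 3))

X = np.vstack([sample("horse"), sample("tiger")])
A = np.vstack([np.tile(signatures["horse"], (200, 1)),
               np.tile(signatures["tiger"], (200, 1))])

# One classifier per attribute, trained only on the two source classes.
attr_clfs = [LogisticRegression().fit(X[:, [j]], A[:, j]) for j in range(3)]

def score(x, cls):
    """Class log-score, attributes down-weighted by observation probability."""
    s = 0.0
    for j, clf in enumerate(attr_clfs):
        p = clf.predict_proba([[x[j]]])[0, 1]      # P(attribute j present)
        p_match = p if signatures[cls][j] else 1 - p
        s += obs_prob[j] * np.log(p_match + 1e-12)
    return s

x_new = sample("zebra", 1)[0]          # an instance of the unseen class
print(max(signatures, key=lambda c: score(x_new, c)))   # expected: zebra
```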


IEEE International Conference on Data Science and Advanced Analytics | 2015

From one star to three stars: Upgrading legacy open data using crowdsourcing

Satoshi Oyama; Yukino Baba; Ikki Ohmukai; Hiroaki Dokoshi; Hisashi Kashima

Despite recent open data initiatives in many countries, a significant percentage of the data provided is in non-machine-readable formats, such as images, rather than in a machine-readable electronic format, restricting its usability. This paper describes the first unified framework for converting legacy open data in image format into a machine-readable and reusable format by using crowdsourcing. Crowd workers are asked not only to extract data from an image of a chart but also to reproduce the chart objects in spreadsheets. The properties of the reconstructed chart objects give their data structures, including series names and values, which are useful for automatic processing of the data by computer. Since results produced by crowdsourcing inherently contain errors, a quality control mechanism was developed that improves the accuracy of extracted tables by aggregating tables created by different workers for the same chart image and by utilizing the data structures obtained from the reproduced chart objects. Experimental results demonstrated that the proposed framework and mechanism are effective.
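
One ingredient of the quality-control mechanism, aggregating tables produced by different workers for the same chart image, can be illustrated with a cell-wise majority vote. The sketch below assumes all workers return equally shaped tables and omits the structure information from the reproduced chart objects:

```python
from collections import Counter

def aggregate_tables(tables):
    """tables: list of equally shaped lists-of-rows from different workers."""
    n_rows, n_cols = len(tables[0]), len(tables[0][0])
    merged = []
    for i in range(n_rows):
        row = []
        for j in range(n_cols):
            votes = Counter(t[i][j] for t in tables)
            row.append(votes.most_common(1)[0][0])   # majority cell value
        merged.append(row)
    return merged

worker_tables = [
    [["2013", "120"], ["2014", "135"]],
    [["2013", "120"], ["2014", "185"]],   # one mistyped cell
    [["2013", "120"], ["2014", "135"]],
]
print(aggregate_tables(worker_tables))    # [['2013', '120'], ['2014', '135']]
```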


Conference on Information and Knowledge Management | 2010

Search as if you were in your home town: geographic search by regional context and dynamic feature-space selection

Makoto Kato; Hiroaki Ohshima; Satoshi Oyama; Katsumi Tanaka

We propose a query-by-example geographic object search method for users who are unfamiliar with the place they are in. Geographic objects, such as restaurants, are often retrieved using an attribute-based or keyword query. These queries, however, are difficult to use for users who have little knowledge of the place where they want to search. The proposed query-by-example method allows users to query by selecting examples in familiar places in order to retrieve objects in unfamiliar places. One challenge is to predict an effective distance metric, which varies between individuals. Another is to calculate the distance between objects in heterogeneous domains while accounting for the feature gap between them, for example, between restaurants in Japan and in China. Our method robustly estimates the distance metric by amplifying the difference between selected and non-selected examples. Using this metric, each object in the familiar domain is evenly assigned to one in the unfamiliar domain to eliminate the difference between the domains. To evaluate our method, we developed a restaurant search system using data obtained from a Japanese restaurant Web guide.
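
The metric-estimation idea can be illustrated with a toy sketch: weight each feature by how strongly it separates the user's selected examples from non-selected ones, then rank objects in the unfamiliar domain by their weighted distance to the selected set. The features, weighting rule, and data below are illustrative assumptions:

```python
import numpy as np

# Restaurant features: [price level, spiciness, seating capacity].
selected     = np.array([[1.0, 0.9, 0.2], [1.2, 0.8, 0.6]])
non_selected = np.array([[1.1, 0.1, 0.3], [0.9, 0.2, 0.7]])

# Amplify dimensions where selected and non-selected examples differ.
gap = np.abs(selected.mean(0) - non_selected.mean(0))
weights = gap / gap.sum()               # spiciness dominates here

def distance(x, examples):
    """Weighted distance from x to the nearest selected example."""
    return np.min(np.sqrt(((examples - x) ** 2 * weights).sum(1)))

# Candidate restaurants in the unfamiliar city, ranked by the metric.
candidates = np.array([[2.0, 0.85, 0.4], [1.0, 0.15, 0.5]])
order = np.argsort([distance(c, selected) for c in candidates])
print(order)   # the spicy candidate ranks first despite its price
```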


International Conference on Ubiquitous Information Management and Communication | 2014

Learning an accurate entity resolution model from crowdsourced labels

Jingjing Wang; Satoshi Oyama; Masahito Kurihara; Hisashi Kashima

We investigated the use of supervised learning methods that use labels from crowd workers to resolve entities. Although obtaining labeled data by crowdsourcing can reduce time and cost, it also brings challenges, such as coping with the variable quality of crowd-generated data. First, we evaluated the quality of crowd-generated labels for actual entity resolution data sets. Then, we evaluated the prediction accuracy of two machine learning methods that use labels from crowd workers: a conventional LPP method using consensus labels obtained by majority voting, and our proposed method, which combines multiple Laplacians built directly from the crowdsourced data. We discuss the relationship between the accuracy of the workers' labels and the prediction accuracy of the two methods.
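
A small numerical sketch of combining graph Laplacians for label propagation: one similarity graph per worker is turned into a Laplacian, the Laplacians are averaged, and a few trusted labels are propagated harmonically. The uniform combination weights and toy graphs are assumptions, not the paper's model:

```python
import numpy as np

# Two workers' similarity graphs over 4 record pairs (symmetric 0/1).
W1 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
W2 = np.array([[0, 1, 1, 0], [1, 0, 0, 0], [1, 0, 0, 1], [0, 0, 1, 0]], float)

def laplacian(W):
    return np.diag(W.sum(1)) - W

L = 0.5 * laplacian(W1) + 0.5 * laplacian(W2)   # uniform combination

# Node 0 is labeled "match" (+1), node 3 "non-match" (-1); 1 and 2 unknown.
labeled, y_l = [0, 3], np.array([1.0, -1.0])
unlabeled = [1, 2]

# Harmonic solution of standard label propagation: L_uu f_u = -L_ul y_l.
L_uu = L[np.ix_(unlabeled, unlabeled)]
L_ul = L[np.ix_(unlabeled, labeled)]
f_u = np.linalg.solve(L_uu, -L_ul @ y_l)
print(dict(zip(unlabeled, np.round(f_u, 3))))   # predicted soft labels
```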


Knowledge Discovery and Data Mining | 2012

Incremental set recommendation based on class differences

Yasuyuki Shirai; Koji Tsuruma; Yuko Sakurai; Satoshi Oyama; Shin-ichi Minato

In this paper, we present a set recommendation framework that proposes sets of items, whereas conventional recommendation methods recommend each item independently. Our new approach can propose sets of items on the basis of the user's initially chosen set. In this approach, items are added to or deleted from the initial set so that the modified set matches the target classification. Since the data sets created by the latest applications can be quite large, we use ZDDs (Zero-suppressed Binary Decision Diagrams) to make the search more efficient. This framework is applicable to a wide range of applications, such as advertising on the Internet and healthy-lifestyle advice based on personal lifelog data.
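
As an illustration of the underlying search problem, the sketch below finds the smallest number of item additions and deletions that moves an initial set into the target class, using a plain breadth-first search in place of the paper's ZDD-based algorithm; the item universe and target classifier are toy assumptions:

```python
from collections import deque

ITEMS = frozenset({"salad", "steak", "fries", "soda", "water"})

def is_healthy(s):                     # toy target classifier over item sets
    return "salad" in s and not ({"fries", "soda"} & s)

def recommend(initial, target=is_healthy):
    """BFS over single-item edits; returns the nearest acceptable set."""
    start = frozenset(initial)
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if target(s):
            return s
        for item in ITEMS:             # try adding or removing one item
            nxt = s - {item} if item in s else s | {item}
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(sorted(recommend({"steak", "fries", "soda"})))
```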
