Takayuki Osogami
IBM
Publication
Featured research published by Takayuki Osogami.
international conference on pattern recognition | 2016
Sakyasingha Dasgupta; Takayuki Yoshizumi; Takayuki Osogami
We introduce Delay Pruning, a simple yet powerful technique for regularizing dynamic Boltzmann machines (DyBMs). The recently introduced DyBM is a structured Boltzmann machine that serves as a generative model of multi-dimensional time series. Although it can have infinitely many layers of units, its biologically motivated structure allows exact inference and learning. A DyBM models conduction delays with fixed-length first-in first-out (FIFO) queues: each neuron is connected to another via such a queue, and spikes from a pre-synaptic neuron travel along the queue to the post-synaptic neuron with a constant delay. Delay Pruning prunes FIFO queues down to length zero by setting some conduction delays to one with a fixed probability, and finally selects the best-performing model with fixed delays. The unique structure and non-sampling-based learning rule of the DyBM make it difficult to apply previously proposed regularization techniques such as Dropout or DropConnect, which can lead to poor generalization. We first evaluate Delay Pruning by letting a DyBM learn a multi-dimensional temporal sequence generated by a Markov chain. We then show its effectiveness in learning high-dimensional sequences on the moving MNIST dataset, comparing it with Dropout and DropConnect.
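The pruning step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the matrix shape, delay range, pruning probability, and the model-selection criterion (here a trivial placeholder) are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def delay_prune(delays, prob):
    """Set each conduction delay to 1 with probability `prob`.

    A delay of 1 corresponds to a FIFO queue of length zero, so the
    edge carries no queued spikes; other delays are left unchanged.
    """
    mask = rng.random(delays.shape) < prob
    pruned = delays.copy()
    pruned[mask] = 1
    return pruned

# Toy 4x4 delay matrix with delays between 1 and 8 (assumed sizes).
delays = rng.integers(1, 9, size=(4, 4))

# Draw several randomly pruned candidates; in the actual method each
# candidate DyBM would be trained and the best-performing model kept
# with its delays fixed. The criterion below is only a placeholder.
candidates = [delay_prune(delays, prob=0.5) for _ in range(10)]
best = min(candidates, key=lambda d: d.sum())
```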
international conference on pattern recognition | 2014
Takayuki Osogami; Takayuki Katsuki
We extend the standard multinomial logit model (MLM) of choice into a hierarchical Bayesian model that simultaneously estimates the preferences of customers and the visibility of items from purchasing history. We say that an item has high visibility when customers are likely to consider it as a candidate before making a choice. We design two algorithms for estimating the parameters of the proposed choice model: one estimates the posterior distribution via Gibbs sampling, and the other approximately performs maximum a posteriori estimation. Our experimental results show that we can estimate the preferences of customers from their purchasing history without prior knowledge of the choice set, whereas existing approaches rely on explicit knowledge of the choice set.
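The idea of visibility modulating a multinomial-logit choice can be sketched as below. This is a deliberately simplified illustration, not the paper's hierarchical Bayesian model: the function name, the multiplicative treatment of visibility, and the example numbers are all assumptions made for the sketch.

```python
import numpy as np

def mnl_choice_probs(utilities, visibility=None):
    """Multinomial-logit choice probabilities over items.

    `utilities` are the customers' (latent) preferences; `visibility`
    in [0, 1] down-weights items a customer is unlikely to consider
    as candidates. With visibility=None this is the standard MLM.
    """
    # Subtract the max for numerical stability before exponentiating.
    weights = np.exp(utilities - utilities.max())
    if visibility is not None:
        weights = weights * visibility
    return weights / weights.sum()

# Three items; the third has high utility relative to the second but
# low visibility, so its choice probability is suppressed.
utilities = np.array([1.0, 0.5, 0.0])
probs = mnl_choice_probs(utilities, visibility=np.array([1.0, 1.0, 0.2]))
```

Under this simplification, an item with low visibility is chosen rarely even when its utility is competitive, which is the intuition the paper exploits to disentangle preference from visibility.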
Archive | 2017
Takayuki Osogami
Human choice is known to depend on the available alternatives in complex but systematic ways, and a significant amount of work has gone into choice models for capturing this behavior. Most existing choice models, particularly those in the class of random utility models, however, cannot represent one of the typical phenomena of human choice, known as the attraction effect. Here, we review recent developments in choice models that can be trained to learn the attraction effect and other typical phenomena of human choice from data on the choices people make. We also discuss possible extensions of this work, which suggest potential directions for future research.
neural information processing systems | 2014
Takayuki Osogami; Makoto Otsuka
arXiv: Neural and Evolutionary Computing | 2015
Takayuki Osogami; Makoto Otsuka
international conference on machine learning | 2017
Takayuki Osogami; Hiroshi Kajino; Taro Sekiyama
national conference on artificial intelligence | 2016
Makoto Otsuka; Takayuki Osogami
arXiv: Neural and Evolutionary Computing | 2016
Takayuki Osogami
international conference on machine learning | 2015
Takayuki Osogami
national conference on artificial intelligence | 2014
Tetsuro Morimura; Takayuki Osogami; Tomoyuki Shirai