Tatsuto Murayama
RIKEN Brain Science Institute
Publications
Featured research published by Tatsuto Murayama.
Physical Review E | 2004
Tatsuto Murayama
We study an ill-posed linear inverse problem in which a binary sequence is reproduced using a sparse matrix. According to a previous study, this model can theoretically provide an optimal compression scheme for an arbitrary distortion level, although the encoding procedure remains an NP-complete problem. In this paper, we focus on the consistency condition for a Markov-type dynamical model to derive an iterative algorithm, following the steps of the Thouless-Anderson-Palmer (TAP) approach. Numerical results show that the algorithm can empirically saturate the theoretical limit for the sparse construction of our codes, which is also very close to the rate-distortion function.
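The rate-distortion function referred to here has a simple closed form in the most basic setting. As an illustrative sketch (not the paper's TAP algorithm), for a uniform binary source under Hamming distortion the theoretical limit is R(D) = 1 − H₂(D):

```python
import math

def h2(x: float) -> float:
    """Binary entropy in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

def rate_distortion(d: float) -> float:
    """R(D) = 1 - h2(D): minimum rate (bits per source bit) needed to
    reproduce a uniform binary source within Hamming distortion d."""
    if d >= 0.5:
        return 0.0
    return 1.0 - h2(d)

# Lossless limit: one bit per source bit.
print(rate_distortion(0.0))              # 1.0
# Tolerating roughly 11% bit errors halves the required rate.
print(round(rate_distortion(0.11), 3))   # ≈ 0.5
```

An iterative encoder such as the one in the paper is said to "saturate the theoretical limit" when its empirical distortion at rate R approaches the D solving R = 1 − H₂(D).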
Journal of Physics A | 2002
Tatsuto Murayama
We analyse the performance of a linear code used for data compression of the Slepian-Wolf type. In our framework, two correlated data sequences are separately compressed into codewords using Gallager-type codes and cast into a communication network through two independent input terminals. At the output terminal, the received codewords are jointly decoded by a practical algorithm based on the Thouless-Anderson-Palmer approach. Our analysis shows that the achievable rate region presented in the data compression theorem is described by first-order phase transitions among several phases. The typical performance of the practical decoder is also well evaluated by the replica method.
Journal of Physics A | 2003
Tatsuto Murayama; Masato Okada
We apply statistical mechanics to an inverse problem of linear mapping to investigate the physics of irreversible compression. We use the replica symmetry breaking (RSB) technique with a toy model to demonstrate the Shannon result. The rate-distortion function, widely known as the theoretical limit of compression with a fidelity criterion, is derived using the Parisi one-step RSB scheme. The bound cannot be achieved in sparsely connected systems, where suboptimal solutions dominate the capacity.
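For reference, the Shannon result demonstrated here is the standard rate-distortion function; in its textbook form (not specific to this paper's RSB derivation), for a Bernoulli(p) source under Hamming distortion it reads:

```latex
R(D) =
\begin{cases}
H_2(p) - H_2(D), & 0 \le D \le \min(p,\, 1-p),\\
0, & D > \min(p,\, 1-p),
\end{cases}
\qquad
H_2(x) = -x \log_2 x - (1-x)\log_2(1-x).
```

The uniform case p = 1/2 reduces to R(D) = 1 − H₂(D), the bound the sparse-matrix codes in the other papers aim to saturate.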
EPL | 2009
Tatsuto Murayama; Peter Davis
Aggregation of noisy observations involves a difficult tradeoff between observation quality, which can be increased by increasing the number of observations, and aggregation quality, which decreases if the number of observations is too large. We clarify this behavior for a prototypical system in which arbitrarily large numbers of observations, exceeding the system capacity, can be aggregated using lossy data compression. We show the existence of a scaling relation between the collective error and the system capacity, and show that large-scale lossy aggregation can outperform lossless aggregation above a critical level of observation noise. Further, we show that universal results for the scaling and the critical noise level can be obtained as the system capacity increases toward infinity.
international symposium on information theory | 2004
Tatsuto Murayama
We study an ill-posed linear inverse problem in which a binary sequence is reproduced using a sparse matrix of the LDPC type. We propose an iterative algorithm based on the belief propagation technique. Numerical results show that the algorithm can empirically saturate the theoretical limit for the sparse construction of our codes, which is also very close to the rate-distortion function itself.
international symposium on information theory | 2005
Tatsuto Murayama
This paper provides an asymptotic analysis of sparse matrix codes in the CEO problem. In this problem, a firm's chief executive officer (CEO) is interested in a data sequence that cannot be observed directly. The CEO therefore deploys a team of L agents, each of whom encodes his or her noisy observation of the data sequence without sharing any information. The CEO then collects all L codeword sequences to recover the data sequence, where the combined data rate R at which the agents can communicate with the CEO is limited. In our scenario, each agent uses an LDPC-like code for lossy compression, while the CEO estimates each data bit by a majority vote of the L reproductions. The replica ansatz and the central limit theorem allow us to derive an analytical description of the problem for large L. The expected error frequency can then be evaluated numerically for a given R, indicating that the optimal decentralization strategy depends strongly on the bandwidth as well as the observation noise level.
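The majority-vote fusion step at the CEO is easy to make concrete. As a minimal sketch, assume each of the L reproductions is wrong independently with some probability q (in the paper, q would combine observation noise and lossy-coding distortion); the CEO's per-bit error is then a binomial tail:

```python
import math

def majority_error(L: int, q: float) -> float:
    """Probability that a majority vote over L independent reproductions,
    each wrong with probability q, decides a bit incorrectly (L odd)."""
    assert L % 2 == 1, "odd L avoids ties"
    return sum(math.comb(L, k) * q**k * (1 - q)**(L - k)
               for k in range((L + 1) // 2, L + 1))

# More agents suppress independent per-agent errors.
print(majority_error(1, 0.2))             # 0.2
print(round(majority_error(9, 0.2), 4))   # 0.0196
```

The tension the paper analyzes is that, under a fixed combined rate R, raising L also raises each agent's coding distortion (and hence q), so the best L is generally finite.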
international symposium on information theory | 2012
Tatsuto Murayama; Peter G Davis
We present a collective behavior in the optimal aggregation of noisy observations of a source. The quality of estimation of the source state involves a difficult tradeoff between sensing quality, which increases with the number of sensors, and aggregation quality, which decreases if the number of sensors is too large. We analytically study the optimal strategy for large-scale aggregation and obtain an explicit, exact result by introducing a basic model. We show that larger-scale aggregation always outperforms smaller-scale aggregation at higher noise levels, while below a critical value of noise there exist moderate aggregation scales at which optimal estimation is realized. We also examine the practical tradeoff between the above two aggregation strategies by applying iterative encoding to linear codes.
Journal of Physics: Conference Series | 2010
Tatsuto Murayama; Peter Davis
Sensing and data aggregation tasks in distributed systems should not be considered as separate issues. The quality of collective estimation involves a fundamental tradeoff between sensing quality, which can be increased by increasing the number of sensors, and aggregation quality under a given network capacity, which decreases if the number of sensors is too large. In this paper, we examine a system-level strategy for the optimal aggregation of data from an ensemble of independent sensors. In particular, we consider large-scale aggregation from very many sensors, in which case the network capacity diverges to infinity. By applying large deviations techniques, we obtain the following result: larger-scale aggregation always outperforms smaller-scale aggregation at higher noise levels, while below a critical value of noise there exist moderate aggregation scales at which optimal estimation is realized. At the critical noise value, there is an abrupt change in the behavior of a parameter characterizing the aggregation strategy, similar to a phase transition in statistical physics.
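The tradeoff can be illustrated with a toy numerical sketch (this is an illustration under simplifying assumptions, not the paper's large-deviations analysis): split a fixed total capacity evenly over L sensors, let each sensor's coding distortion D follow the binary rate-distortion limit 1 − H₂(D) = C/L, cascade it with the observation noise p, and fuse by majority vote. Varying L then exposes the interior optimum at low noise.

```python
import math

def h2(x):
    if x <= 0 or x >= 1:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def distortion_at_rate(rate):
    """Smallest Hamming distortion D with 1 - h2(D) = rate (uniform binary
    source), found by bisection on [0, 0.5]."""
    if rate >= 1.0:
        return 0.0
    if rate <= 0.0:
        return 0.5
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2
        if 1.0 - h2(mid) > rate:
            lo = mid       # rate to spare: distortion can grow
        else:
            hi = mid
    return (lo + hi) / 2

def collective_error(L, total_rate, p):
    """Majority-vote error when total_rate is split evenly over L sensors,
    each observing through a BSC(p) and compressing at rate total_rate/L."""
    D = distortion_at_rate(total_rate / L)
    q = p * (1 - D) + (1 - p) * D      # cascade of noise and coding error
    return sum(math.comb(L, k) * q**k * (1 - q)**(L - k)
               for k in range((L + 1) // 2, L + 1))

# Sweep odd ensemble sizes at fixed capacity and low noise: the error is
# minimized at an intermediate L, as in the moderate-scale regime above.
for L in (1, 3, 5, 9, 15):
    print(L, round(collective_error(L, 3.0, 0.05), 4))
```

In this toy setting the minimum sits at a moderate L for small p, matching the qualitative picture in the abstract; the paper's exact critical noise value comes from its own model, not from this sketch.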
foundations of computational intelligence | 2007
Tatsuto Murayama; Peter Davis
This article presents a large-scale analysis of a distributed sensing model for systemized and networked sensors. In the system model, a data center acquires binary information from an ensemble of L sensors, each of which independently encodes its noisy observations of an original bit sequence and transmits the encoded sequence to the data center at a strictly limited combined data rate R. Supposing that the sensors use independent quantization techniques, we show that the performance can be evaluated for any given finite R as the number of sensors L goes to infinity. The analysis shows how the optimal strategy for the distributed sensing problem changes at critical values of the data rate R or the noise level p.
international symposium on information theory | 2006
Tatsuto Murayama; Peter Davis
We consider the problem of distributed sensing in a noisy environment: the CEO problem. Here, individual sensor readings are encoded and transmitted independently by multiple sensors with a limited combined data rate. We present a scaling analysis of a semi-practical system using sparse matrix codes, and show that the analysis is consistent with a general argument based on the existence of the rate-distortion function. This approach captures the tradeoff between reducing errors due to environmental noise and increasing errors due to lossy coding as the number of sensors increases, showing threshold behavior in the optimal number of sensors.