Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Toshiyasu Matsushima is active.

Publication


Featured research published by Toshiyasu Matsushima.


IEEE Transactions on Information Theory | 1991

A class of distortionless codes designed by Bayes decision theory

Toshiyasu Matsushima; Hiroshige Inazumi; Shigeichi Hirasawa

The problem of distortionless encoding when the parameters of the probabilistic model of a source are unknown is considered from a statistical decision theory point of view. A class of predictive and nonpredictive codes is proposed that are optimal within this framework. Specifically, it is shown that the codeword length of the proposed predictive code coincides with that of the proposed nonpredictive code for any source sequence. A bound on the redundancy of universal coding is given in terms of the supremum of the Bayes risk. If this supremum exists, then there exists a minimax code in the proposed class whose mean code length approaches it, and the minimax code is given by the Bayes solution relative to the prior distribution of the source parameters that maximizes the Bayes risk.
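
As a brief sketch of the Bayes (mixture) coding formulation behind this result (notation such as the prior w(\theta) is ours): the nonpredictive code assigns the codeword length

\ell(x^n) = -\log \int p(x^n \mid \theta)\, w(\theta)\, d\theta ,

while the predictive code assigns

\ell(x^n) = -\sum_{t=1}^{n} \log \int p(x_t \mid x^{t-1}, \theta)\, w(\theta \mid x^{t-1})\, d\theta .

By the chain rule and Bayes' rule the posterior-predictive probabilities multiply out to the same mixture probability, which is the coincidence of codeword lengths stated above (ignoring integer-length rounding).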


Fast Software Encryption | 2007

New bounds for PMAC, TMAC, and XCBC

Kazuhiko Minematsu; Toshiyasu Matsushima

We provide new security proofs for the PMAC, TMAC, and XCBC message authentication modes. The previous security bounds for these modes were σ²/2ⁿ, where n is the block size in bits and σ is the total number of queried message blocks. Our new bounds are lq²/2ⁿ for PMAC and lq²/2ⁿ + 4q²/2²ⁿ for TMAC and XCBC, where q is the number of queries and l is the maximum message length in n-bit blocks. This improves on the previous results in most practical cases, e.g., when no message is exceptionally long compared to the others.
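
For a rough sense of the improvement (an illustrative comparison, not taken from the paper): if all q queried messages have the maximum length of l blocks, then σ = lq, so the previous bound reads σ²/2ⁿ = l²q²/2ⁿ, which is larger than the new PMAC bound lq²/2ⁿ by a factor of l.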


International Conference on Progress in Cryptology | 2007

Tweakable enciphering schemes from hash-sum-expansion

Kazuhiko Minematsu; Toshiyasu Matsushima

We study a tweakable blockcipher for arbitrarily long messages (also called a tweakable enciphering scheme) that consists of a universal hash function and an expansion, a keyed function with short input and long output. Such schemes, called HCTR and HCH, have recently been proposed; they use (a variant of) the counter mode of a blockcipher for the expansion. We provide a security proof of a structure that underlies HCTR and HCH. We prove that the expansion can be instantiated with any function secure against known-plaintext attacks (KPA), i.e., a weak pseudorandom function (WPRF). As an application of our proof, we provide efficient blockcipher-based schemes comparable to HCH and HCTR. For the double-block-length case, our result is an interesting extension of previous attempts to build a double-block-length cryptographic permutation using a WPRF.
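
As a rough sketch of the "expansion" component mentioned above, a keyed function with short input and long output can be built by running a short-input keyed primitive in counter mode. In the sketch below, HMAC-SHA256 stands in for that primitive purely for illustration; it is not the blockcipher-based instantiation analyzed in the paper.

# Schematic counter-mode expansion: Expand_K(S) = F_K(S, 1) || F_K(S, 2) || ...
# HMAC-SHA256 is only an illustrative stand-in for the keyed primitive F_K.
import hmac
import hashlib

def counter_expansion(key: bytes, seed: bytes, out_len: int) -> bytes:
    """Expand a short seed into out_len pseudorandom bytes."""
    out = b""
    counter = 1
    while len(out) < out_len:
        block = hmac.new(key, seed + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:out_len]

# Example: derive a 100-byte keystream from a 16-byte seed.
keystream = counter_expansion(b"k" * 32, b"s" * 16, 100)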


International Symposium on Information Theory | 2007

An Algorithm for Computing the Secrecy Capacity of Broadcast Channels with Confidential Messages

Kensuke Yasui; Tota Suko; Toshiyasu Matsushima

In this paper, we present an iterative algorithm for computing the secrecy capacity of the broadcast channel with confidential messages (BCC) in the situation where the main channel is less noisy than the eavesdropper's channel. The global convergence of the algorithm is proved, and an expression for its convergence rate is derived.
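
For reference, under the less-noisy assumption the quantity being computed is the classical secrecy-capacity expression (Csiszar and Korner)

C_s = \max_{P_X} \left[ I(X;Y) - I(X;Z) \right],

where Y is the legitimate receiver's output and Z is the eavesdropper's output; the iterative algorithm approaches this maximum.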


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2006

A Modification Method for Constructing Low-Density Parity-Check Codes for Burst Erasures

Gou Hosoya; Hideki Yagi; Toshiyasu Matsushima; Shigeichi Hirasawa

We study a modification method for constructing low-density parity-check (LDPC) codes for solid burst erasures. Our proposed modification method is based on a column permutation technique for the parity-check matrix of the original LDPC code. It can change the burst erasure correction capability without degrading the performance over random erasure channels. Simulation results show that the performance of codes permuted by our method is better than that of the original codes, especially with two or more solid burst erasures.
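
As a minimal illustration of the column-permutation idea (a generic sketch with a hypothetical permutation, not the specific permutation rule proposed in the paper):

# Permuting the columns of a parity-check matrix H reorders the code bit
# positions. Random-erasure performance is unchanged, but a solid burst of
# erasures now hits a different set of columns of the original H.
def permute_columns(H, perm):
    """Return H with its columns reordered according to perm."""
    return [[row[j] for j in perm] for row in H]

H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
perm = [0, 3, 1, 4, 2, 5]   # hypothetical interleaving-style permutation
H_perm = permute_columns(H, perm)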


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2007

A Note on the ε-Overflow Probability of Lossless Codes

Ryo Nomura; Toshiyasu Matsushima; Shigeichi Hirasawa

In this letter, we generalize the achievability of variable-length coding from two viewpoints: the definition of the overflow probability and the definition of achievability. We define the overflow probability as the probability that the codeword length, not normalized per symbol, is larger than ηn, and we introduce the ε-achievability of variable-length codes, which means that there exists a code for the source whose overflow probability is smaller than or equal to ε. We then show that the ε-achievability of variable-length codes is essentially equivalent to the ε-achievability of fixed-length codes for general sources. Moreover, using these results, we derive the condition for ε-achievability for some restricted sources given ε.
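
In symbols (roughly restating the definitions above, with \ell(X^n) the total codeword length for a source block of length n): the overflow probability is \Pr\{\ell(X^n) > \eta n\}, and a threshold \eta is ε-achievable if there exist codes whose overflow probability is at most ε.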


International Symposium on Information Theory | 2010

Toward computing the capacity region of degraded broadcast channel

Kensuke Yasui; Toshiyasu Matsushima

Recently, computing the capacity region of the degraded broadcast channel (DBC) was shown to be a nonconvex optimization problem by Calvo et al. [6]. Due to the lack of convexity, there seems to be no efficient method that solves it in polynomial time. For another nonconvex optimization problem, however, Kumar et al. showed that an Arimoto-Blahut-type algorithm converges to the global optimum when certain conditions hold [12]. In this paper, we present an Arimoto-Blahut-type algorithm for computing the capacity region of the DBC. Using Kumar's method, we prove the global convergence of the algorithm under certain conditions and derive an expression for its convergence rate.
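
For reference, the region being computed is the classical characterization of the degraded broadcast channel X \to Y_1 \to Y_2: the closure of all rate pairs satisfying

R_2 \le I(U; Y_2), \qquad R_1 \le I(X; Y_1 \mid U)

for some auxiliary variable U with joint distribution p(u)\,p(x \mid u); maximizing a weighted sum of these rates over such distributions is the kind of nonconvex optimization referred to above.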


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2006

Fast Algorithm for Generating Candidate Codewords in Reliability-Based Maximum Likelihood Decoding

Hideki Yagi; Toshiyasu Matsushima; Shigeichi Hirasawa

We consider reliability-based heuristic search methods for maximum likelihood decoding, which generate test error patterns (or, equivalently, candidate codewords) according to their heuristic values. Previous studies have proposed methods for reducing the space complexity of these algorithms, which becomes very large for long block codes at medium to low channel signal-to-noise ratios. In this paper, we propose a new method for reducing the time complexity of generating candidate codewords by storing some already generated candidate codewords. Simulation results show that the increase in memory size is small.
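
The following is a generic best-first sketch of the kind of candidate generation described above: a priority-queue search that emits test error patterns in nondecreasing order of the total reliability of the flipped bits. It is an illustration only, not the authors' algorithm, and it does not include their reuse of already generated candidates.

import heapq

def test_error_patterns(reliabilities, max_patterns):
    """Yield test error patterns (lists of bit positions to flip),
    cheapest (least total flipped reliability) first."""
    # Visit positions from least to most reliable.
    order = sorted(range(len(reliabilities)), key=lambda i: reliabilities[i])
    # Heap entries: (cost, index of the last used position in 'order', pattern).
    heap = [(0.0, -1, ())]
    produced = 0
    while heap and produced < max_patterns:
        cost, last, pattern = heapq.heappop(heap)
        yield [order[i] for i in pattern]
        produced += 1
        # Each pattern is extended only with later positions, so every
        # pattern is generated exactly once and costs never decrease.
        for nxt in range(last + 1, len(order)):
            heapq.heappush(heap, (cost + reliabilities[order[nxt]], nxt, pattern + (nxt,)))

# Example: the 8 cheapest patterns for a short received word.
for pattern in test_error_patterns([0.9, 0.1, 0.4, 0.7, 0.2], 8):
    print(pattern)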


IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences | 2006

A Note on Construction of Orthogonal Arrays with Unequal Strength from Error-Correcting Codes

Tomohiko Saito; Toshiyasu Matsushima; Shigeichi Hirasawa

Orthogonal arrays (OAs) play an important role in the field of experimental design. It is known that OAs are closely related to error-correcting codes, and many OAs can therefore be constructed from error-correcting codes. However, these OAs are suitable only for cases in which equal interaction effects can be assumed, for example, all two-factor interaction effects. Since such cases are rare in experimental design, OAs constructed from error-correcting codes are not very practical. In this paper, we define OAs with unequal strength; in this terminology, OAs from error-correcting codes are OAs with equal strength. We show that OAs with unequal strength are closer to practical OAs than OAs with equal strength, and we clarify the relation between OAs with unequal strength and unequal error-correcting codes. Finally, we propose construction methods for OAs with unequal strength from unequal error-correcting codes.
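
For background on the code-to-OA connection invoked above (a standard textbook illustration of Delsarte's theorem, not the unequal-strength construction proposed in the paper): a linear code whose dual code has minimum distance d' yields an orthogonal array of strength d' - 1. The sketch below checks this for the [7,4] Hamming code, whose dual (the simplex code) has minimum distance 4, so its 16 codewords form an OA of strength 3.

from itertools import product, combinations

# Generator matrix of the [7,4] Hamming code (one common systematic form).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

# Enumerate all 2^4 codewords; they are the rows of the orthogonal array.
rows = []
for msg in product([0, 1], repeat=4):
    rows.append(tuple(sum(m * g for m, g in zip(msg, col)) % 2
                      for col in zip(*G)))

# Strength-3 check: every choice of 3 columns shows each of the 8 binary
# triples the same number of times (here 16 / 8 = 2).
for cols in combinations(range(7), 3):
    counts = {}
    for r in rows:
        key = tuple(r[c] for c in cols)
        counts[key] = counts.get(key, 0) + 1
    assert set(counts.values()) == {2}, cols
print("16 x 7 array has strength 3")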


Australian Communications Theory Workshop | 2010

A Linear Programming bound for Unequal Error Protection codes

Tomohiko Saito; Yoshifumi Ukita; Toshiyasu Matsushima; Shigeichi Hirasawa

In coding theory, it is important to calculate an upper bound on the size of codes with a given length and minimum distance. The Linear Programming (LP) bound is known to be a good upper bound on the size of codes. On the other hand, Unequal Error Protection (UEP) codes have also been studied in coding theory. In a UEP code, a codeword has special bits that are protected against a greater number of errors than the other bits. In this paper, we propose an LP bound for UEP codes. First, we generalize the distance distribution (or weight distribution) of codes. Using this generalization, we derive the LP bound for UEP codes and give a numerical example. Finally, we compare the proposed bound with a modified Hamming bound.
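
For comparison, the classical (equal-protection) LP bound being generalized here bounds the maximum size of a binary code of length n and minimum distance d by the optimum of

maximize \sum_{i=0}^{n} A_i
subject to A_0 = 1, \quad A_i \ge 0, \quad A_i = 0 \ (1 \le i \le d-1),
\quad \sum_{i=0}^{n} A_i K_k(i) \ge 0 \quad (k = 0, \dots, n),

where K_k(i) = \sum_{j} (-1)^{j} \binom{i}{j} \binom{n-i}{k-j} is the Krawtchouk polynomial and the A_i form the distance distribution (the quantity generalized in this paper).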

Collaboration


Dive into Toshiyasu Matsushima's collaborations.

Top Co-Authors

Hideki Yagi
University of Electro-Communications

Tomohiko Saito
Aoyama Gakuin University