Kenji Ohkuma
Toshiba
Publications
Featured research published by Kenji Ohkuma.
Selected Areas in Cryptography | 2009
Kenji Ohkuma
The block cipher PRESENT, designed as an ultra-lightweight cipher, has a 31-round SPN structure in which the S-box layer consists of 16 parallel 4-bit S-boxes and the diffusion layer is a bit permutation. The designers claimed that the maximum linear characteristic deviation is at most 2^-43 for 28 rounds and concluded that PRESENT is not vulnerable to linear cryptanalysis. However, we have found that 32% of PRESENT keys are weak with respect to linear cryptanalysis: for these keys the linear deviation can be much larger than the linear characteristic value because of the multi-path effect. We discovered a 28-round path with a linear deviation of 2^-39.3 for the weak keys. Furthermore, we found that linear cryptanalysis can attack up to 24 rounds of PRESENT for the weak keys.
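The single-round biases that such linear characteristics are built from come from the linear approximation table (LAT) of the 4-bit S-box. As an illustrative sketch (not the paper's attack), the following computes the LAT of the actual PRESENT S-box and confirms that its best single-S-box linear bias is 2^-2:

```python
# Linear approximation table (LAT) biases of the PRESENT S-box.
# Sketch only: multi-round characteristic bounds like 2^-43 are built
# by chaining such per-S-box biases through the bit permutation.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # the PRESENT S-box

def parity(v):
    """Parity (XOR of all bits) of a small integer."""
    return bin(v).count("1") & 1

def bias(a, b):
    """Bias of the approximation <a,x> = <b,S(x)> over all 16 inputs."""
    matches = sum(parity(a & x) == parity(b & SBOX[x]) for x in range(16))
    return matches / 16 - 0.5

max_bias = max(abs(bias(a, b)) for a in range(1, 16) for b in range(1, 16))
print(max_bias)  # 0.25, i.e. 2^-2: the best linear bias of one S-box
```

The multi-path (linear hull) effect discussed in the abstract arises because many such single-path characteristics with the same input and output masks can add up, making the effective deviation key-dependent.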
Cryptology and Network Security | 2010
Tomoko Yonemura; Yoshikazu Hanatani; Taichi Isogai; Kenji Ohkuma; Hirofumi Muratani
Algebraic torus-based cryptosystems are public key cryptosystems based on the discrete logarithm problem, and they have more compact representations than finite-field-based cryptosystems. In this paper, we propose parameter selection criteria for algebraic torus-based cryptosystems from the viewpoints of security and efficiency. The criteria include the following conditions: consistent resistance to attacks on the algebraic tori and their embedding fields, and a large degree of freedom in selecting parameters suitable for each implementation. Both the extension degree and the characteristic size of the finite field on which the algebraic tori are defined are adjustable. We also provide examples of parameters satisfying the criteria.
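The compactness mentioned above comes from a standard fact about the torus T_n: its elements live in the extension field F_{q^n} (n coordinates over F_q) but can be represented with only phi(n) coordinates, a compression ratio of n/phi(n). A minimal sketch of that ratio for a few common choices of n (this illustrates only the compression, not the paper's security criteria):

```python
# Compression ratio n / phi(n) of the algebraic torus T_n: an element of
# T_n(F_q) sits inside F_{q^n} but needs only phi(n) F_q-coordinates.
# Sketch of background material; the paper's criteria also cover attack
# resistance on the tori and their embedding fields.
from math import gcd

def euler_phi(n):
    """Euler's totient: count of 1 <= k <= n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in (2, 6, 30):
    print(f"T_{n}: {n} -> {euler_phi(n)} coordinates "
          f"(ratio {n / euler_phi(n)})")
# T_2 compresses by 2x, T_6 by 3x, T_30 by 3.75x
```

Larger n gives better compression but constrains the extension degree and characteristic size, which is why the paper treats both as adjustable parameters.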
ACM Symposium on Applied Computing | 1992
Kenji Ohkuma
A new learning algorithm, the master learning algorithm (MLA), is proposed, which treats two evaluation functions for a neural network independently. In this method, the learning process is divided into two steps: optimization of the total error function E for the learning data, and optimization of a subordinate evaluation function C. In the first step, the back-propagation learning algorithm (BP) is applied to the learning data until the value of E becomes small enough. In the second step, the function C is minimized under a constraint on E. This method makes it possible to improve network performance, such as generalization, robustness, and learnability for additional data, by using an appropriate subordinate function C. The applicability of the method is shown through numerical experiments. The algorithm can easily be extended to cases with more than two evaluation functions.
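The two-step scheme can be sketched on a hypothetical toy problem (a linear model standing in for the neural network, with a weight-norm criterion standing in for the subordinate function C; neither is from the paper): step 1 runs gradient descent on the total error E, and step 2 descends C while rejecting any step that would violate the reached error level.

```python
# Hypothetical toy illustration of the two-step idea, not the original
# MLA setup: phase 1 minimizes the total error E on learning data;
# phase 2 minimizes a subordinate criterion C (a weight-norm proxy for
# robustness) under a constraint that E stays near its phase-1 level.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

E = lambda w: float(np.mean((X @ w - y) ** 2))  # total error function E
C = lambda w: float(np.sum(w ** 2))             # subordinate function C

# Step 1: gradient descent on E (the BP analogue for this linear model).
w = np.zeros(3)
for _ in range(500):
    w -= 0.05 * (2 / len(y)) * X.T @ (X @ w - y)
eps = 1.05 * E(w)   # error level the second step must respect
c1 = C(w)

# Step 2: descend C, accepting only steps that keep E within the bound.
for _ in range(200):
    candidate = w * (1 - 0.002)   # small gradient step on C = ||w||^2
    if E(candidate) <= eps:
        w = candidate

print(E(w) <= eps, C(w) <= c1)  # → True True
```

The accept/reject rule is a crude stand-in for the paper's constrained minimization, but it shows the key property: C never improves at the cost of pushing E past the level reached in step 1.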
Archive | 2001
Kenji Ohkuma; Fumihiko Sano; Hirofumi Muratani; Shinichi Kawamura
Archive | 2001
Hirofumi Muratani; Masahiko Motoyama; Kenji Ohkuma; Fumihiko Sano; Shinichi Kawamura
Selected Areas in Cryptography | 2000
Kenji Ohkuma; Hirofumi Muratani; Fumihiko Sano; Shinichi Kawamura
Archive | 2004
Kouichi Ichimura; Noritsugu Shiokawa; Mikio Fujii; Kentaro Torii; Kenji Ohkuma
Archive | 2009
Tomoko Yonemura; Hirofumi Muratani; Atsushi Shimbo; Kenji Ohkuma; Taichi Isogai; Yuichi Komano; Kenichiro Furuta; Yoshikazu Hanatani
Archive | 2009
Yoshikazu Hanatani; Kenji Ohkuma; Atsushi Shimbo; Hirofumi Muratani; Taichi Isogai; Yuichi Komano; Kenichiro Furuta; Tomoko Yonemura
Archive | 2006
Kenji Ohkuma; Mikio Fujii; Kouichi Ichimura; Hayato Goto; Kentaro Torii