Publication


Featured research published by Daigo Muramatsu.


International Conference on Document Analysis and Recognition | 2003

An HMM online signature verifier incorporating signature trajectories

Daigo Muramatsu; Takashi Matsumoto

Authentication of individuals is rapidly becoming an important issue. On-line signature verification is one of the methods that use biometric features. This paper proposes a new HMM algorithm for on-line signature verification. After preprocessing, the input signature is discretized in a polar coordinate system. This particular discretization leads to a simple procedure for assigning initial state and state transition probabilities. The algorithm utilizes only pen position trajectories; no other information is used, which keeps the algorithm simple and fast. A preliminary experiment shows that the proposed algorithm is promising.
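
The abstract states that the pen-position trajectory is discretized in a polar coordinate system to obtain a discrete sequence for the HMM, but it does not give the quantization details. A minimal sketch of one plausible discretization; the bin counts and the quantile-based radius binning are assumptions, not the paper's settings:

```python
import numpy as np

def discretize_trajectory(pen_xy, n_angle_bins=16, n_radius_bins=4):
    """Convert a pen-position trajectory into discrete observation symbols
    via polar coordinates of successive pen movements.

    pen_xy: array of shape (T, 2) with sampled (x, y) pen positions.
    Returns an integer symbol sequence of length T - 1.
    """
    deltas = np.diff(pen_xy, axis=0)                     # movement vectors between samples
    radius = np.linalg.norm(deltas, axis=1)              # movement length
    theta = np.mod(np.arctan2(deltas[:, 1], deltas[:, 0]), 2 * np.pi)

    # Quantize the angle uniformly and the radius by quantiles of this signature.
    angle_bin = np.floor(theta / (2 * np.pi) * n_angle_bins).astype(int)
    angle_bin = np.clip(angle_bin, 0, n_angle_bins - 1)
    edges = np.quantile(radius, np.linspace(0, 1, n_radius_bins + 1)[1:-1])
    radius_bin = np.digitize(radius, edges)

    return angle_bin * n_radius_bins + radius_bin        # combined symbol index
```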


IEEE Transactions on Image Processing | 2015

Gait-Based Person Recognition Using Arbitrary View Transformation Model

Daigo Muramatsu; Akira Shiraishi; Yasushi Makihara; Md. Zasim Uddin; Yasushi Yagi

Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios.
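
The AVTM itself is trained from 3D gait volume sequences of training subjects, which cannot be reproduced from the abstract alone. The sketch below only illustrates the basic view-transformation idea with a plain linear map fitted by least squares between paired training features from a source view and a destination view; the linear form, variable names, and shapes are assumptions, not the paper's model.

```python
import numpy as np

def fit_linear_vtm(X_src, X_dst):
    """Least-squares view transformation matrix W so that X_src @ W ~ X_dst.
    X_src, X_dst: paired training gait features, shape (n_subjects, feature_dim)."""
    W, *_ = np.linalg.lstsq(X_src, X_dst, rcond=None)
    return W

def transform(feature_src, W):
    """Map a probe gait feature from the source view into the destination view."""
    return feature_src @ W

def match(feature_probe_src, feature_gallery_dst, W):
    """Dissimilarity after bringing both features into the same (destination) view."""
    return np.linalg.norm(transform(feature_probe_src, W) - feature_gallery_dst)
```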


IEEE Transactions on Information Forensics and Security | 2006

A Markov chain Monte Carlo algorithm for Bayesian dynamic signature verification

Daigo Muramatsu; Mitsuru Kondo; Masahiro Sasaki; Satoshi Tachibana; Takashi Matsumoto

Authentication of handwritten signatures is becoming increasingly important. With the rapid increase in the number of people who use Tablet PCs and PDAs, online signature verification is one of the most promising techniques for signature verification. This paper proposes a new algorithm that implements a Monte Carlo-based Bayesian scheme for online signature verification. The new algorithm consists of a learning phase and a testing phase. In the learning phase, semi-parametric models are trained using the Markov chain Monte Carlo (MCMC) technique to draw posterior samples of the parameters involved. In the testing phase, these samples are used to evaluate the probability that a signature is genuine. The proposed algorithm achieved an EER of 1.2% on the MCYT signature corpus, where random forgeries were used for learning and skilled forgeries for evaluation. An experimental result is also reported with skilled forgery data used for learning.
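
The abstract does not specify the semi-parametric model, so the sketch below only illustrates the two-phase learn-then-test scheme with a deliberately simplified stand-in: genuine-signature scores are modeled as Gaussian with unknown mean and log-variance, posterior samples are drawn with a random-walk Metropolis sampler, and those samples give a posterior predictive density for a test score. Every distributional choice here is an assumption, not the paper's model.

```python
import numpy as np

def metropolis_posterior(scores, n_samples=5000, step=0.1, rng=None):
    """Random-walk Metropolis over (mu, log_sigma) of a Gaussian score model
    with a flat prior. Returns posterior samples, one row per draw."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.array([np.mean(scores), np.log(np.std(scores) + 1e-6)])

    def log_post(t):
        mu, log_sigma = t
        sigma = np.exp(log_sigma)
        return np.sum(-0.5 * ((scores - mu) / sigma) ** 2 - np.log(sigma))

    samples, lp = [], log_post(theta)
    for _ in range(n_samples):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept step
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

def predictive_density(test_score, samples):
    """Posterior predictive density of the test score under the genuine model;
    the signature is accepted as genuine when this exceeds a decision threshold."""
    mu, sigma = samples[:, 0], np.exp(samples[:, 1])
    return np.mean(np.exp(-0.5 * ((test_score - mu) / sigma) ** 2)
                   / (np.sqrt(2 * np.pi) * sigma))
```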


International Conference on Biometrics | 2007

Effectiveness of pen pressure, azimuth, and altitude features for online signature verification

Daigo Muramatsu; Takashi Matsumoto

Many algorithms for online signature verification using multiple features have been proposed. Recently it has been argued that pen pressure, azimuth, and altitude can cause instability and deteriorate performance; in SVC2004, algorithms that did not use pen pressure and inclination features outperformed those that did. However, we previously found that these features improved performance in evaluations using our private database, so their effectiveness appears to depend on the algorithm. We therefore re-evaluated our algorithm using the same database as used in SVC2004 and discuss the effectiveness of pen pressure, azimuth, and altitude. Experimental results show that even though these features are not very effective when used by themselves, they improve performance when combined with other features. When pen pressure and inclination features were included, an EER of 3.61% was achieved, compared to an EER of 5.79% when these features were not used.
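
A minimal sketch of the kind of comparison reported above: per-channel distances between a test signature and an enrolled reference are fused with and without the pressure and inclination channels, and the verifier's EER is measured for each configuration. The fusion rule (a plain sum of per-channel distances) and the channel names are assumptions for illustration only.

```python
import numpy as np

# Hypothetical channel names; the paper's exact feature set is not listed in full here.
BASE_CHANNELS = ["x", "y"]                                # pen-position trajectories
EXTRA_CHANNELS = ["pressure", "azimuth", "altitude"]      # features under evaluation

def fused_distance(test_sig, ref_sig, channels):
    """Sum of per-channel L2 distances between a test signature and a reference.

    test_sig / ref_sig: dicts mapping channel name -> 1-D array, assumed to be
    resampled to the same length and normalized per channel beforehand.
    """
    return sum(np.linalg.norm(test_sig[c] - ref_sig[c]) for c in channels)

# The reported comparison amounts to measuring the EER once with
# fused_distance(..., BASE_CHANNELS) and once with
# fused_distance(..., BASE_CHANNELS + EXTRA_CHANNELS).
```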


International Conference on Electronics, Circuits and Systems | 1998

A novel highly linear CMOS buffer

Khayrollah Hadidi; J. Sobhi; A. Hasankhaan; Daigo Muramatsu; Takashi Matsumoto

A high input impedance, low output impedance, highly linear CMOS buffer is very desirable in many analog CMOS circuits. Conventional buffers employ feedback, which severely limits bandwidth and linearity even at medium frequencies. A recent open-loop buffer, although linear, suffers from a large power-supply voltage requirement, which makes it undesirable for low-voltage deep-submicron processes. Here, going back to basics leads us to a very simple and highly linear buffer that shows more than 26 dB THD improvement over conventional designs.


IEEE Transactions on Systems, Man, and Cybernetics | 2016

View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition

Daigo Muramatsu; Yasushi Makihara; Yasushi Yagi

Cross-view gait recognition authenticates a person using a pair of gait image sequences with different observation views. View difference causes degradation of gait recognition accuracy, and several solutions have been proposed to suppress this degradation. One useful solution is to apply a view transformation model (VTM) that encodes a joint subspace of multiview gait features trained with auxiliary data from multiple training subjects, who are different from the test subjects (recognition targets). In the VTM framework, a gait feature with a destination view is generated from one with a source view by estimating a vector on the trained joint subspace, and gait features with the same destination view are compared for recognition. Although this framework improves recognition accuracy as a whole, the fit of the VTM depends on the given gait feature pair, which causes an inhomogeneously biased dissimilarity score. Because it is well known that normalizing such inhomogeneously biased scores generally improves recognition accuracy, we propose a VTM incorporating a score normalization framework with quality measures that encode the degree of the bias. From a pair of gait features, we calculate two quality measures and use them, together with the biased dissimilarity score, to calculate the posterior probability that both gait features originate from the same subject. The proposed method was evaluated on two gait datasets: a large population gait dataset of over-ground walking (course dataset) and a treadmill gait dataset. The experimental results show that incorporating the quality measures contributes to accuracy improvement in many cross-view settings.
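
The abstract says the two quality measures and the biased dissimilarity score are combined into a posterior probability that the pair comes from the same subject, but it does not give the functional form. A minimal sketch, assuming a logistic model fitted on labeled training pairs; the logistic form, the interaction terms, and the use of scikit-learn are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_quality_normalizer(dissimilarity, quality_a, quality_b, same_subject):
    """Fit p(same subject | score, qualities) on labeled training pairs.
    All inputs are 1-D arrays of equal length; same_subject holds 0/1 labels."""
    X = np.column_stack([dissimilarity, quality_a, quality_b,
                         dissimilarity * quality_a, dissimilarity * quality_b])
    return LogisticRegression(max_iter=1000).fit(X, same_subject)

def normalized_score(model, dissimilarity, quality_a, quality_b):
    """Posterior probability that the gait feature pair is from the same subject."""
    X = np.array([[dissimilarity, quality_a, quality_b,
                   dissimilarity * quality_a, dissimilarity * quality_b]])
    return model.predict_proba(X)[0, 1]
```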


IPSJ Transactions on Computer Vision and Applications | 2013

Gait Verification System for Criminal Investigation

Haruyuki Iwama; Daigo Muramatsu; Yasushi Makihara; Yasushi Yagi

This paper describes the first gait verification system for criminal investigation using footage from surveillance cameras. The system is designed so that criminal investigators, who are not specialists in computer vision-based gait verification, can independently use it to verify unknown perpetrators against suspects or ex-convicts in criminal investigations. Each step of the gait verification process is carried out through interactive operation on a graphical user interface. Finally, for each pair of compared subjects selected by a user, the system outputs the posterior probability that the compared subjects are the same person, taking into consideration various circumstances of the subjects such as image size, frame rate, observation views, and clothing. One gait specialist and ten non-specialists participated in operation tests of the system using five different datasets with various types of scenes, each of which contained two or three verification sets. It was shown that all the non-specialists, as well as the gait specialist, could obtain reasonable verification results for almost all of the verification sets.


International Conference on Biometrics | 2016

GEINet: View-invariant gait recognition using a convolutional neural network

Kohei Shiraga; Yasushi Makihara; Daigo Muramatsu; Tomio Echigo; Yasushi Yagi

This paper proposes a method of gait recognition using a convolutional neural network (CNN). Inspired by the great successes of CNNs in image recognition tasks, we feed in the most prevalent image-based gait representation, that is, the gait energy image (GEI), as an input to a CNN designed for gait recognition called GEINet. More specifically, GEINet is composed of two sequential triplets of convolution, pooling, and normalization layers, and two subsequent fully connected layers, which output a set of similarities to individual training subjects. We conducted experiments to demonstrate the effectiveness of the proposed method in terms of cross-view gait recognition in both cooperative and uncooperative settings using the OU-ISIR large population dataset. As a result, we confirmed that the proposed method significantly outperformed state-of-the-art approaches, in particular in verification scenarios.
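
The abstract specifies the overall topology (two conv-pool-normalization triplets followed by two fully connected layers whose outputs are similarities to the individual training subjects). The sketch below follows that topology, but the channel counts, kernel sizes, and input resolution are illustrative assumptions rather than the published hyperparameters.

```python
import torch
import torch.nn as nn

class GEINetSketch(nn.Module):
    """GEINet-style CNN over a single-channel gait energy image (GEI).
    Layer sizes below are illustrative guesses, not the paper's values."""

    def __init__(self, n_train_subjects, gei_size=(128, 88)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 18, kernel_size=7), nn.ReLU(),
            nn.MaxPool2d(2), nn.LocalResponseNorm(5),      # first triplet
            nn.Conv2d(18, 45, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2), nn.LocalResponseNorm(5),      # second triplet
        )
        with torch.no_grad():                              # infer flattened size
            n_flat = self.features(torch.zeros(1, 1, *gei_size)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_flat, 1024), nn.ReLU(),
            nn.Linear(1024, n_train_subjects),             # similarity per training subject
        )

    def forward(self, gei):
        return self.classifier(self.features(gei))
```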


Journal of Network and Computer Applications | 2010

Visual-based online signature verification using features extracted from video

Kumiko Yasuda; Daigo Muramatsu; Satoshi Shirato; Takashi Matsumoto

We propose a visual-based online signature verification system. The input module of the system consists of only low-cost cameras (webcams) and does not need an electronic tablet. Online signature data are obtained from the images captured by the webcams by tracking the pen tip. The pen tip tracking is implemented by the sequential Monte Carlo method. Then, the distance between the input signature data and reference signature data enrolled in advance is computed. Finally, the input signature is classified as genuine or a forgery by comparing the distance with a threshold. In this paper, we consider seven camera positions. We performed experiments using a private database consisting of 150 genuine signatures to decide the best camera position. The experimental results show that we should place the webcam to the side of the hand. Moreover, we evaluated the proposed system with a camera placed to the side of the hand against a different database consisting of 390 genuine signatures and 1560 skilled forged signatures. The proposed system achieved an equal error rate of 4.1% against this database.
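
Pen-tip tracking with the sequential Monte Carlo method is the part of this system that translates most directly into code. A minimal particle-filter sketch, assuming a random-walk motion model and a user-supplied likelihood(frame, x, y) function that scores how pen-tip-like the image looks at a position; both are assumptions, since the paper's motion and observation models are not given in the abstract.

```python
import numpy as np

def track_pen_tip(frames, likelihood, init_xy, n_particles=500, motion_std=5.0, rng=None):
    """Sequential Monte Carlo (particle filter) tracking of the pen tip.

    frames:      iterable of video frames (e.g. grayscale numpy arrays).
    likelihood:  function (frame, x, y) -> non-negative observation weight.
    init_xy:     initial pen-tip position (x, y).
    Returns the estimated (x, y) trajectory as an array of shape (n_frames, 2).
    """
    rng = np.random.default_rng() if rng is None else rng
    particles = np.tile(np.asarray(init_xy, float), (n_particles, 1))
    trajectory = []
    for frame in frames:
        # Predict: random-walk motion model.
        particles += rng.normal(scale=motion_std, size=particles.shape)
        # Update: weight particles by the observation likelihood.
        weights = np.array([likelihood(frame, x, y) for x, y in particles]) + 1e-12
        weights /= weights.sum()
        # Estimate the pen-tip position, then resample.
        trajectory.append(weights @ particles)
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
    return np.array(trajectory)
```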


Lecture Notes in Computer Science | 2003

An HMM on-line signature verification algorithm

Daigo Muramatsu; Takashi Matsumoto

Authentication of individuals is rapidly becoming an important issue. On-line signature verification is one of the methods that use biometric features of individuals. This paper proposes a new HMM algorithm for on-line signature verification incorporating signature trajectories. The algorithm utilizes only pen position trajectories; no other information is used, which keeps the algorithm simple and fast. A preliminary experiment was performed, and the intersection of the FAR and FRR curves was 2.78%.
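
The reported 2.78% is the operating point where the false acceptance rate (FAR) and false rejection rate (FRR) curves cross, i.e. the equal error rate. A short sketch of how that crossing can be located from genuine and forgery scores; the score orientation (larger means more genuine) is an assumption.

```python
import numpy as np

def equal_error_rate(genuine_scores, forgery_scores):
    """Find the threshold where FAR (forgeries accepted) equals FRR (genuines rejected)."""
    thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
    far = np.array([(forgery_scores >= t).mean() for t in thresholds])
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))          # closest point to the crossing
    return (far[i] + frr[i]) / 2, thresholds[i]
```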
