Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mariko Nakano-Miyatake is active.

Publication


Featured research published by Mariko Nakano-Miyatake.


international conference on electronics, communications, and computers | 2009

A Methodology of Steganalysis for Images

A. Hernandez-Chamorro; Angelina Espejel-Trujillo; J. Lopez-Hernandez; Mariko Nakano-Miyatake; Hector Perez-Meana

This paper compares several steganalysis methods proposed in the literature and, based on the comparison results, proposes a global steganalysis methodology. The secret-message detection capacity of each method is evaluated on stego-images generated by typical data-hiding algorithms, in terms of false-negative and false-positive error rates over 100 images. No single steganalysis method can detect the presence of a secret message in all types of stego-images; therefore, a reliable analysis of a suspicious image requires an efficient combination of several methods. Based on the comparison results, some considerations for such combined steganalysis are provided.
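The evaluation protocol described in the abstract can be sketched as follows: each steganalysis method is treated as a binary detector, its false-positive and false-negative rates are measured over a labeled image set, and several detectors are fused by majority vote. The helper names and the majority-vote fusion rule are illustrative assumptions, not details taken from the paper.

```python
def error_rates(predictions, labels):
    """False-positive and false-negative rates.
    labels: True = stego image, False = cover image."""
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    fn = sum(1 for p, y in zip(predictions, labels) if not p and y)
    covers = labels.count(False)
    stegos = labels.count(True)
    return fp / covers, fn / stegos

def majority_vote(per_detector_predictions):
    """Fuse several detectors: flag an image as stego if most agree."""
    n = len(per_detector_predictions)
    return [sum(votes) > n / 2
            for votes in zip(*per_detector_predictions)]
```

Fusing detectors this way directly reflects the paper's observation that no single method covers all stego-image types.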


international work conference on artificial and natural neural networks | 1997

A Continuous Time Structure for Filtering and Prediction Using Hopfield Neural Networks

Hector Perez-Meana; Mariko Nakano-Miyatake

Transversal FIR adaptive filters have been widely used in echo and noise cancellation systems, equalization of communication channels, speech coders, predictive deconvolution for seismic exploration, etc., almost always implemented digitally, because advances in digital technology have made it possible to implement sophisticated and efficient adaptive filtering algorithms. Even so, transversal FIR adaptive filters still present several limitations when required to handle, in real time, frequencies higher than the audio range, or when a relatively large number of taps is required. To avoid these limitations, several alternative structures have been proposed. Among them, analog filters appear to be a desirable alternative to transversal FIR digital adaptive filters because they can handle very high frequencies, and their size and power requirements are potentially much smaller than those of their digital counterparts. This paper proposes a continuous-time transversal adaptive filter structure whose coefficients are estimated in continuous time by an artificial Hopfield neural network. Simulation results using the proposed structure in cancellation, prediction, and equalization configurations show its desirable features.
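The core idea behind such a structure can be sketched in software, assuming a standard formulation (not the paper's circuit): a Hopfield network whose energy function is the mean-square error E(w) = ½wᵀRw − pᵀw performs continuous-time gradient descent, so the filter weights evolve as dw/dt = −(Rw − p) and converge to the Wiener solution Rw = p. The Euler step size, iteration count, and the toy 2-tap statistics below are all made-up values for illustration.

```python
def hopfield_weights(R, p, dt=0.01, steps=5000):
    """Euler-integrate the gradient dynamics dw/dt = -(R w - p)."""
    w = [0.0] * len(p)
    for _ in range(steps):
        # gradient of the quadratic error surface: R w - p
        grad = [sum(R[i][j] * w[j] for j in range(len(w))) - p[i]
                for i in range(len(w))]
        w = [wi - dt * g for wi, g in zip(w, grad)]
    return w

# Toy 2-tap identification problem: R is the input autocorrelation
# matrix, p the input/desired-output cross-correlation (invented here;
# the optimal weights are [1.0, 0.5]).
R = [[1.0, 0.5], [0.5, 1.0]]
p = [1.25, 1.0]
w = hopfield_weights(R, p)
```

In the analog realization this integration is performed by the network's circuit dynamics rather than by discrete Euler steps.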


midwest symposium on circuits and systems | 2004

On-line handwritten character recognition using spline function

Karina Toscano-Medina; R. Toscano-Medina; Mariko Nakano-Miyatake; Hector Perez-Meana; M. Yasuhara

During the last several years many systems have been developed that attempt to simulate the behavior of the human brain. Two of the most important paradigms used to this end are neural networks and artificial intelligence; both are primary tools for developing systems capable of recognizing handwritten characters, voices, faces, signatures, and many other biometric traits that have attracted considerable attention in recent years. In this paper a new algorithm for cursive handwritten character recognition based on spline functions is proposed, in which the inverse order of the handwritten character construction task is used to recognize the character. From the sampled data obtained with a digitizer board, the sequence of the most significant points (optimal knots) of the handwritten character is obtained, and then the natural spline function and the steepest-descent method are used to interpolate and approximate the character shape. Each character model is constructed from a training set consisting of sequences of optimal knots. Finally, the unknown input character is compared with all character models to obtain similarity scores, and the model with the highest score is taken as the recognized character. The proposed system is evaluated by computer simulation, and the results show a global recognition rate of 93.5%.
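The matching stage of the pipeline above can be sketched as follows: each character model is a sequence of knot points, an unknown character is scored against every model, and the highest-scoring model wins. Knot extraction and the spline/steepest-descent fitting are omitted, and the similarity score used here (negative mean point distance after resampling both sequences to a common length) is an illustrative stand-in for the paper's measure.

```python
import math

def resample(points, n):
    """Linearly resample a polyline of (x, y) points to n points."""
    out = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        j = min(int(t), len(points) - 2)
        a = t - j
        x = points[j][0] * (1 - a) + points[j + 1][0] * a
        y = points[j][1] * (1 - a) + points[j + 1][1] * a
        out.append((x, y))
    return out

def similarity(knots_a, knots_b, n=32):
    """Negative mean point-to-point distance (higher = more similar)."""
    ra, rb = resample(knots_a, n), resample(knots_b, n)
    return -sum(math.dist(p, q) for p, q in zip(ra, rb)) / n

def recognize(unknown, models):
    """models: dict mapping character label -> knot sequence."""
    return max(models, key=lambda c: similarity(unknown, models[c]))
```
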


Archive | 2010

Video Watermarking Technique using Visual Sensibility and Motion Vector

Mariko Nakano-Miyatake; Hector Perez-Meana

Together with the rapid growth of Internet services, copyright violation problems, such as unauthorized duplication and alteration of digital materials, have increased considerably (Langelaar et al., 2001). Copyright protection of digital materials is therefore a very important issue that requires an urgent solution, and watermarking is considered a viable technique to solve it. Numerous watermarking algorithms have been proposed to date; most of them are image watermarking algorithms, and relatively few deal with video sequences. Although image watermarking algorithms can be used to protect a video signal, they are generally not efficient for this purpose, because they consider neither the temporal redundancy of the video signal nor temporal attacks, which are efficient attacks against video watermarking (Swanson et al., 1998). In watermarking schemes for copyright protection, the embedded watermark must generally be imperceptible and robust against common attacks such as lossy compression, cropping, noise contamination, and filtering (Wolfgang et al., 1999). In addition, video watermarking algorithms must satisfy the following requirements: blind detection, high processing speed, and conservation of the video file size. Blind detection means that the watermark detection process does not require the original video sequence; the computational complexity of watermark detection must not affect the video decoding time; and the file size of the video sequence must be similar before and after watermarking. Due to the redundancy of the video sequence, attacks such as frame dropping and frame averaging can effectively destroy the embedded watermark without causing any degradation to the video signal, so the design of an efficient video watermarking algorithm must consider this type of attack (Wolfgang et al., 1999).
Basically, video watermarking algorithms can be classified into three categories: watermarking in the base band (Wolfgang et al., 1999; Hartung & Girod, 1998; Swanson et al., 1998; Kong et al., 2006), watermarking during the video coding process (Liu et al., 2004; Zhao et al., 2003; Ueno, 2004; Noorkami & Mersereau, 2006), and watermarking of the coded video sequence (Wang et al., 2004; Biswas et al., 2005; Langelaar & Lagendijk, 2002). In the base-band technique the watermark is embedded in the uncompressed video stream, so almost any image watermarking algorithm can be used; however, the computational complexity of watermark embedding and detection is generally considerably high for
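One common way to watermark inside the coding loop, of the motion-vector kind named in the chapter title, can be sketched as follows. This is an illustrative toy, not the chapter's algorithm: one bit is embedded per selected motion vector by forcing the parity of one component, which keeps the file size nearly unchanged and allows blind extraction (no original video needed), matching the requirements listed above.

```python
def embed_bit(mv, bit):
    """Force the parity of the horizontal component to carry one bit."""
    x, y = mv
    if x % 2 != bit:
        x += 1          # minimal perturbation keeps the motion close
    return (x, y)

def extract_bit(mv):
    """Blind extraction: read the bit back from the parity alone."""
    return mv[0] % 2

def embed(message_bits, motion_vectors):
    return [embed_bit(mv, b)
            for mv, b in zip(motion_vectors, message_bits)]
```

A practical scheme would also select which vectors to modify using visual-sensibility criteria, so distortion stays imperceptible.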


international symposium on neural networks | 1997

Adaptive filtering and prediction based on Hopfield neural networks

Mariko Nakano-Miyatake; Hector Perez-Meana

Adaptive filters have been successfully used in the solution of several practical problems such as echo and noise cancellation, line enhancement, speech coding, equalization, etc. Consequently, intensive research has been carried out to develop more efficient adaptive filter structures and adaptation algorithms, almost all of them implemented digitally, because advances in digital technology make it possible to implement increasingly sophisticated and efficient adaptive filtering algorithms. However, adaptive digital filters still present several limitations when required to handle frequencies higher than the audio range. Recently, interest in adaptive analog filters has grown because they can handle much higher frequencies, and their size and power requirements are potentially much smaller than those of their digital counterparts. This paper proposes an analog adaptive structure for filtering and prediction whose coefficients are estimated in continuous time by an artificial Hopfield neural network. Simulation results are given to show the desirable features of the proposed structure.


midwest symposium on circuits and systems | 1995

VLSI implementation of an extended Hamming neural network for non-binary pattern recognition

Luis Nino-de-Rivera; Mariko Nakano-Miyatake; J.C. Sanchez; Hector Perez-Meana; Edgar Sánchez-Sinencio

A VLSI implementation of an extended Hamming neural network for classifying non-binary input patterns is proposed. The extended Hamming network splits a non-binary input pattern into N binary input patterns, where N is the number of bits used to represent each pixel of the original pattern. A Hamming neural network processes each of the N binary patterns into which the image is divided, and a Winner-Take-All (WTA) structure selects the winning layer; the layer with the most winners decides the winning image. The VLSI Hamming network processes each binary pattern to measure the Hamming distance between the pattern under test and the reference, and a CMOS WTA circuit identifies the winning pattern. An analog distance between patterns is proposed as an alternative method: the numerical difference between two pixels in the same relative position is computed and accumulated, and the total difference between each image and the reference pattern indicates how close an image is to the others. An architecture for this proposal is shown and compared with the extended neural network structure.
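The bit-plane scheme described above can be mirrored in plain Python (this models the network's behavior, not its VLSI form): an N-bit-per-pixel pattern is split into N binary planes, each plane votes for the reference at minimum Hamming distance, and the reference with the most winning planes wins.

```python
def bit_planes(pattern, nbits):
    """Split a list of nbits-wide pixels into nbits binary patterns."""
    return [[(p >> b) & 1 for p in pattern] for b in range(nbits)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def classify(pattern, references, nbits=8):
    """Index of the reference winning the most bit-plane votes."""
    planes = bit_planes(pattern, nbits)
    ref_planes = [bit_planes(r, nbits) for r in references]
    votes = [0] * len(references)
    for b in range(nbits):
        # winner-take-all over this bit plane
        dists = [hamming(planes[b], rp[b]) for rp in ref_planes]
        votes[dists.index(min(dists))] += 1
    return votes.index(max(votes))
```
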


Archive | 2012

Authentication of Script Format Documents Using Watermarking Techniques

Mario Gonzalez-Lee; Mariko Nakano-Miyatake; Hector Perez-Meana

Electronic document authentication is a subject of active research because, with the release of very efficient programs for document, image, and video processing, the manipulation of such digital content has become easier. The development of efficient methods that protect sensitive digital material from unauthorized manipulation, without degrading the original, is therefore a very important task, with applications in the financial, banking, insurance, legal, and government fields, among others.


Archive | 2012

Multidimensional Features Extraction Methods in Frequency Domain

Jesus Olivares-Mercado; Gualberto Aguilar-Torres; Karina Toscano-Medina; Gabriel Sanchez-Perez; Mariko Nakano-Miyatake; Hector Perez-Meana

Pattern recognition has been a topic of active research during the last 30 years, due to the high performance these schemes achieve in the solution of many practical problems in several fields of science, medicine, and engineering. The efficiency of a pattern recognition algorithm strongly depends on an accurate feature extraction scheme, able to represent the pattern under analysis with as few parameters as possible while keeping intra-pattern similarity high and inter-pattern similarity very low. These requirements have led to the development of several feature extraction methods, which can be divided into three groups: feature extraction in the time domain, in the spatial domain, and in the frequency domain. In all cases the feature extraction method strongly depends on the specific application; methods that perform well in some applications may perform poorly in others (for example, the features used for speech or speaker recognition are quite different from those used for fingerprint or face recognition). This chapter presents an analysis of some successful frequency-domain feature extraction methods proposed for applications involving audio, speech, and image pattern recognition. Evaluation results are also provided to show the effectiveness of these methods.
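A minimal example of the frequency-domain family surveyed here: keep the first few DCT-II coefficients of a signal as a compact feature vector. The cut-off k and the decision to drop the DC term are illustrative assumptions, not values from the chapter.

```python
import math

def dct2_coeffs(x, k):
    """First k coefficients of the (unnormalized) DCT-II of x."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * u * (2 * i + 1) / (2 * n))
                for i in range(n))
            for u in range(k)]

def features(signal, k=8):
    c = dct2_coeffs(signal, k)
    # drop the DC term so features ignore a constant offset
    return c[1:]
```

Because the DCT compacts most of the signal's energy into a few low-frequency coefficients, a short vector like this can represent the pattern with far fewer parameters than the raw samples.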


international conference on electronics, communications, and computers | 2009

Document Authentication Scheme Using Characters Metrics

M. Garcia-Horta; Mario Gonzalez-Lee; Mariko Nakano-Miyatake; Hector Perez-Meana

In this paper, we propose a document authentication scheme for documents in PDF format. The proposed scheme is an improved version of Zhu's document protection scheme, in which the render sequence is encoded by permuting predefined characters of the sequence with a secret key, and this permutation is used as the authentication code. Zhu's scheme has two inconveniences for practical applications: first, the file size of the encoded document is considerably larger than the original; second, the structure of the encoded render sequence is unnatural, so it can easily be detected by a third party, and reverse engineering can be applied to tamper with the document. To solve these problems, this paper proposes a document authentication scheme in which the encoded sequence keeps its natural structure and the file size of the encoded document is not increased.
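The key-driven permutation idea can be sketched as a toy model (this abstracts away the PDF-level render-sequence encoding entirely): a secret key seeds a pseudo-random permutation of selected character indices, the permutation serves as the authentication code, and verification recomputes it without needing any original document.

```python
import random

def auth_permutation(num_chars, key):
    """Key-dependent permutation of character indices (toy model)."""
    order = list(range(num_chars))
    random.Random(key).shuffle(order)
    return order

def verify(observed_order, num_chars, key):
    """Authentic iff the observed order matches the keyed permutation."""
    return observed_order == auth_permutation(num_chars, key)
```

Any tampering that disturbs the permuted order breaks verification, while the keyed shuffle itself is reproducible only with the secret key.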


international symposium on industrial electronics | 2000

Time scaling algorithm of speech signal to assist learning of a foreign language

P. Rodriguez-Peralta; Mariko Nakano-Miyatake; Hector Perez-Meana; Gonzalo Duchen-Sanchez

For basic-level students of foreign languages, normal-speed speech is very difficult to understand fully, because of a lack of training in understanding the oral language; however, when the speech is slowed down, understanding in most cases increases. This fact suggests that, to improve foreign-language learning, students should be able to adjust the speed of the speech according to their own level of understanding. This paper presents a comparison of two time-scaling algorithms used to assist foreign-language learning. Both algorithms consist of a pitch-detection stage and a time-scaling stage; pitch detection in both is based on the autocorrelation method for speech signals proposed by Rabiner et al. (1976). Time scaling in the first method consists of duplicating the pitch periods of voiced segments while leaving unvoiced ones unchanged; the second method is based on the short-time Fourier transform. Experimental results, as MOS (mean opinion scores) obtained with Spanish, French, German, Russian, Japanese, and Italian, show the desirable features of both time-scaling algorithms when used to assist students in learning foreign languages. The performance of both algorithms when the pitch-detection stage is affected by noise is also shown.
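The first time-scaling method above can be sketched as follows, assuming the pitch period (in samples) has already been estimated by the autocorrelation stage: each pitch-period-long block of a voiced segment is repeated, stretching the segment without changing its pitch. The `repeat` factor and function name are illustrative.

```python
def slow_down_voiced(samples, period, repeat=2):
    """Stretch a voiced segment by repeating each pitch-period block.

    samples: voiced-segment samples; period: pitch period in samples.
    Unvoiced segments would be passed through unchanged by the caller.
    """
    out = []
    for start in range(0, len(samples), period):
        block = samples[start:start + period]
        out.extend(block * repeat)
    return out
```

Repeating whole pitch periods (rather than arbitrary blocks) is what keeps the perceived pitch of the slowed speech unchanged.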

Collaboration


Dive into Mariko Nakano-Miyatake's collaborations.

Top Co-Authors

Hector Perez-Meana
Instituto Politécnico Nacional

Akira Kurematsu
University of Electro-Communications

Antonio Cedillo-Hernandez
National Autonomous University of Mexico

Francisco J. García-Ugalde
National Autonomous University of Mexico

Gabriel Sanchez-Perez
Instituto Politécnico Nacional

Karina Toscano-Medina
Instituto Politécnico Nacional

Manuel Cedillo-Hernandez
National Autonomous University of Mexico

Clara Cruz-Ramos
Instituto Politécnico Nacional