
Publication


Featured research published by Mireya S. García-Vázquez.


Signal Processing | 2014

Transcoding resilient video watermarking scheme based on spatio-temporal HVS and DCT

Antonio Cedillo-Hernandez; Manuel Cedillo-Hernandez; Mireya S. García-Vázquez; Mariko Nakano-Miyatake; Hector Perez-Meana; Alejandro Alvaro Ramírez-Acosta

Video transcoding is a legitimate operation widely used to modify the video format so that content can be accessed on end-user devices, which may have limitations in spatial and temporal resolution, bit rate, and supported video coding standards. In many previous watermarking algorithms the embedded watermark cannot survive video transcoding, because this operation combines several aggressive attacks, especially when the target device requires coding at a lower bit rate; as a consequence, the embedded watermark may be lost. This paper proposes a video watermarking scheme that is robust against video transcoding performed in the base-band domain. To obtain robustness against transcoding, four criteria based on the Human Visual System (HVS) are employed to embed a sufficiently robust watermark while preserving its imperceptibility. The quantization index modulation (QIM) algorithm is used to embed and detect the watermark in the 2D Discrete Cosine Transform (2D-DCT) domain. Watermark imperceptibility is evaluated with the conventional peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), showing sufficiently good visual quality. Computer simulation results demonstrate the watermark's robustness against video transcoding as well as common signal processing operations and intentional attacks on video sequences.
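To make the embedding step concrete, the following is a minimal Python sketch of quantization index modulation applied to one mid-frequency coefficient of an 8x8 2D-DCT block. It illustrates only the generic QIM/DCT mechanism; the paper's HVS-based criteria for selecting blocks and coefficients, and its actual quantization step, are not reproduced here, and the function names and the step size delta are illustrative assumptions.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # Separable 2D DCT (type-II, orthonormal).
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def idct2(coeffs):
    # Inverse 2D DCT.
    return idct(idct(coeffs, norm='ortho', axis=0), norm='ortho', axis=1)

def qim_embed_bit(coeff, bit, delta=8.0):
    # QIM: quantize the coefficient onto one of two interleaved lattices,
    # chosen by the bit value (lattice offset 0 or delta/2).
    offset = delta / 2.0 if bit else 0.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_detect_bit(coeff, delta=8.0):
    # Pick the lattice whose nearest point is closer to the coefficient.
    d0 = abs(coeff - np.round(coeff / delta) * delta)
    d1 = abs(coeff - (np.round((coeff - delta / 2) / delta) * delta + delta / 2))
    return int(d1 < d0)

# Example: embed one bit in a mid-frequency coefficient of an 8x8 block.
block = np.random.rand(8, 8) * 255
coeffs = dct2(block)
coeffs[3, 2] = qim_embed_bit(coeffs[3, 2], bit=1)
watermarked = idct2(coeffs)
recovered = qim_detect_bit(dct2(watermarked)[3, 2])   # -> 1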


Mexican Conference on Pattern Recognition | 2013

Video Images Fusion to Improve Iris Recognition Accuracy in Unconstrained Environments

Juan Miguel Colores-Vargas; Mireya S. García-Vázquez; Alejandro Alvaro Ramírez-Acosta; Hector Perez-Meana; Mariko Nakano-Miyatake

To date, research on iris recognition systems has focused on optimizing existing stages and proposing new ones for uncontrolled-environment systems in order to improve recognition rates. In this paper we propose to exploit the biometric information in iris video by creating a fused normalized template through an image fusion technique. This method merges the biometric features of a group of video frames into an enhanced image, which improves iris recognition rates, in terms of Hamming distance, in an uncontrolled-environment system. We analyzed seven different methods based on pixel-level and multi-resolution fusion techniques on a subset of images from the MBGC.v2 database. The experimental results show that the PCA method performs best, improving recognition values according to the Hamming distances in 83% of the experiments.
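As an illustration of pixel-level fusion, here is a minimal Python sketch of a PCA-weighted fusion of several aligned, normalized iris images, assuming the frames have already been segmented, normalized to the same size and registered. The weighting rule (first principal component of the inter-image covariance) is a common PCA fusion scheme and is not claimed to match the exact implementation evaluated in the paper.

import numpy as np

def pca_fuse(images):
    # Pixel-level PCA fusion of aligned, same-size grayscale images:
    # weight each source image by the dominant eigenvector of the
    # image-to-image covariance matrix, then form the weighted sum.
    stack = np.stack([img.astype(np.float64).ravel() for img in images])  # (N, H*W)
    cov = np.cov(stack)                        # N x N covariance between images
    _, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    pc1 = np.abs(eigvecs[:, -1])               # dominant eigenvector
    weights = pc1 / pc1.sum()                  # normalize weights to sum to 1
    fused = np.tensordot(weights, stack, axes=1)
    return fused.reshape(images[0].shape)

# Example with three synthetic "normalized iris templates" of size 64x512.
frames = [np.random.rand(64, 512) for _ in range(3)]
fused_template = pca_fuse(frames)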


International Conference on Artificial Intelligence | 2011

Iris image evaluation for non-cooperative biometric iris recognition system

Juan M. Colores; Mireya S. García-Vázquez; Alejandro Alvaro Ramírez-Acosta; Hector Perez-Meana

During video acquisition in an automatic non-cooperative biometric iris recognition system, not all iris images obtained from the video sequence are suitable for recognition. Hence, it is important to acquire high-quality iris images and quickly identify them in order to eliminate poor-quality ones (mostly defocused images) before subsequent processing. In this paper, we present the results of a comparative analysis of four methods for iris image quality assessment used to select clear images in the video sequence. The goal is to provide a solid analytic ground to underscore the strengths and weaknesses of the most widely implemented methods for iris image quality assessment. The methods are compared based on their robustness to different types of iris images and the computational effort they require. Experiments with the built database (100 videos from MBGC v2) demonstrate that the best performance scores are generated by the kernel proposed by Kang & Park, with a FAR of 1.6% and an FRR of 2.3%.
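The defocus-assessment idea can be sketched as a convolution with a small high-pass kernel followed by an energy measure, in the spirit of the kernel-based approaches compared in the paper. The 5x5 kernel below is an illustrative centre-surround filter, not the published Kang & Park coefficients, and the selection threshold is left to the caller.

import numpy as np
from scipy.signal import convolve2d

# Illustrative high-pass kernel (NOT the exact Kang & Park coefficients):
# a centre-surround filter that responds strongly to sharp, in-focus detail.
HIGH_PASS = np.array([[-1, -1, -1, -1, -1],
                      [-1,  1,  2,  1, -1],
                      [-1,  2,  4,  2, -1],
                      [-1,  1,  2,  1, -1],
                      [-1, -1, -1, -1, -1]], dtype=np.float64)

def focus_score(gray):
    # Scalar focus measure: mean energy of the high-pass response.
    response = convolve2d(gray.astype(np.float64), HIGH_PASS, mode='valid')
    return float(np.mean(response ** 2))

def select_clear_frames(frames, threshold):
    # Keep only frames whose focus score exceeds a chosen threshold.
    return [f for f in frames if focus_score(f) > threshold]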


International Conference on Biometrics | 2015

Cross-sensor iris verification applying robust fused segmentation algorithms

Eduardo Garea Llano; Juan M. Colores Vargas; Mireya S. García-Vázquez; Luis Miguel Zamudio Fuentes; Alejandro Alvaro Ramírez-Acosta

Currently, identity management systems work with heterogeneous iris images captured by different types of iris sensors. Indeed, iris recognition is widely used in different environments where the identity of a person must be established. It is therefore a challenging problem to maintain a stable iris recognition system that is effective for all types of iris sensors. This paper proposes a new cross-sensor iris recognition scheme that increases recognition accuracy. The novelty of this work is a new strategy of applying robust fusion methods at the segmentation stage for cross-sensor iris recognition. Experiments with the Casia-V3-Interval, Casia-V4-Thousand, Ubiris-V1 and MBGC-V2 databases show that our scheme increases recognition accuracy and is robust to different types of iris sensors while reducing user interaction.
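The paper's specific segmentation-fusion rules are not detailed in this abstract, so the sketch below shows one simple way such a fusion can work: a per-pixel majority vote over the binary iris masks produced by several segmentation algorithms. The function and the random masks in the example are purely illustrative.

import numpy as np

def majority_vote_fusion(masks):
    # Fuse binary segmentation masks of the same shape: a pixel belongs to
    # the fused iris mask when more than half of the segmenters marked it.
    stack = np.stack([m.astype(bool) for m in masks])   # (K, H, W)
    votes = stack.sum(axis=0)
    return votes > (len(masks) / 2.0)

# Example: three hypothetical segmenters that disagree on some pixels.
h, w = 240, 320
masks = [np.random.rand(h, w) > 0.5 for _ in range(3)]
fused_mask = majority_vote_fusion(masks)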


Mexican Conference on Pattern Recognition | 2012

Adaptive spatial concealment of damaged coded images

Alejandro Alvaro Ramírez-Acosta; Mireya S. García-Vázquez; Mariko Nakano

The transmission over error-prone networks of still images or video coded by block-based techniques such as JPEG and MPEG may lead to block loss, degrading the visual quality of the images. In such environments, for example wireless communication where retransmission may not be feasible, error concealment techniques are required to reduce the degradation caused by the missing information. This paper surveys algorithms for spatial error concealment and proposes an adaptive, effective method based on edge analysis that performs well in realistic situations where significant information loss is present and data from past reference images are not available. The proposed method and the reviewed algorithms were implemented, tested and compared. Experimental results show that the proposed approach outperforms existing methods by up to 8.6 dB on average.
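A minimal sketch of spatial concealment of a lost 8x8 block is shown below: each missing pixel is interpolated from the four surrounding boundary rows and columns with inverse-distance weights. This is a simplified stand-in for the adaptive edge-based method proposed in the paper, which additionally analyses edge directions around the lost block; the function name and block size are illustrative.

import numpy as np

def conceal_block(frame, y, x, size=8):
    # Conceal a lost size x size block at (y, x) by distance-weighted
    # interpolation from the four surrounding boundary rows/columns.
    top    = frame[y - 1,      x:x + size].astype(np.float64)   # row above
    bottom = frame[y + size,   x:x + size].astype(np.float64)   # row below
    left   = frame[y:y + size, x - 1].astype(np.float64)        # column left
    right  = frame[y:y + size, x + size].astype(np.float64)     # column right

    block = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            # Weights are inversely related to the distance to each boundary.
            wt, wb = size - i, i + 1
            wl, wr = size - j, j + 1
            block[i, j] = (wt * top[j] + wb * bottom[j] +
                           wl * left[i] + wr * right[i]) / (wt + wb + wl + wr)
    frame[y:y + size, x:x + size] = block
    return frame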


Iberoamerican Congress on Pattern Recognition | 2011

Local quality method for the iris image pattern

Luis Miguel Zamudio-Fuentes; Mireya S. García-Vázquez; Alejandro Alvaro Ramírez-Acosta

Recent research on iris recognition without user cooperation has introduced video-based iris capture. Indeed, it provides more information and more flexibility in the image acquisition stage for non-cooperative iris recognition systems. However, a video sequence can contain images of varying quality. Therefore, it is necessary to select the highest-quality images from each video to improve iris recognition performance. In this paper, we propose, as part of a video quality assessment module, a new local iris image quality method based on spectral energy analysis. Unlike most existing approaches, this method does not require iris region segmentation to determine image quality; instead, the proposed algorithm uses a significant portion of the iris region to measure the quality in that area. The method evaluates the energy of 1000 images extracted from 200 iris videos of the MBGC NIR video database. The results show that the proposed method is very effective at assessing the quality of the iris information, selecting the two highest-energy images as the best two images from each video in 226 milliseconds.
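A minimal sketch of a segmentation-free spectral-energy score is given below: it assumes a fixed central window of the eye image contains iris texture, removes the lowest frequencies of its 2D spectrum, and sums the remaining power. The window size, low-frequency cut-off and the selection of the two best frames are illustrative choices, not the paper's exact parameters.

import numpy as np

def spectral_energy_score(gray, window=64, low_cut=5):
    # Quality score from the high-frequency spectral energy of a central
    # window, without any iris segmentation.
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    patch = gray[cy - window // 2:cy + window // 2,
                 cx - window // 2:cx + window // 2].astype(np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    power = np.abs(spectrum) ** 2
    # Discard a small low-frequency square around the DC component.
    c = window // 2
    power[c - low_cut:c + low_cut, c - low_cut:c + low_cut] = 0.0
    return float(power.sum())

def best_two_frames(frames):
    # Return the two frames with the highest spectral-energy scores.
    return sorted(frames, key=spectral_energy_score, reverse=True)[:2]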


IEEE Electronics, Robotics and Automotive Mechanics Conference | 2010

IPTV Technology and Its Distribution in Home Networks

Alejandro Alvaro Ramírez-Acosta; Mireya S. García-Vázquez

The main difference between traditional broadcast television and television over IP (Internet Protocol) is the introduction of real-time user interactivity, with information transmitted bidirectionally. Deploying IPTV technology requires many functional elements that constitute a complete contribution and distribution chain. This article presents the current state of IPTV technology to facilitate the understanding of its technical aspects. It also describes various residential distribution scenarios for IPTV, as well as considerations for selecting a domestic distribution mechanism.


Electronics, Robotics and Automotive Mechanics Conference | 2008

MPEG-4 AVC/H.264 and VC-1 Codecs Comparison Used in IPTV Video Streaming Technology

Alejandro A. Ramirez Acosta; Mireya S. García-Vázquez; Juan Miguel Colores-Vargas

Thanks to the continuous evolution of video codecs and coding algorithms, with ever-improving performance, television has found a new transmission path over the Internet. Taking these developments into account, this article analyzes the performance of the main commercial codecs based on the MPEG-4 AVC/H.264 (RealNetworks, Apple) and VC-1 (Microsoft) standards. These codecs are today's solutions for delivering television over IP networks using video streaming technology. The codec analysis is based on the evaluation of the main parameters that this technology requires; the video streaming technology and its architecture are also described.
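The abstract does not list the exact evaluation parameters, but objective codec comparisons of this kind typically rely on rate-distortion figures such as per-frame PSNR at a given bit rate. The sketch below shows the standard PSNR computation between original and decoded frames; it is a generic metric, not the paper's specific test protocol.

import numpy as np

def psnr(reference, decoded, max_val=255.0):
    # Peak signal-to-noise ratio between an original and a decoded frame.
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10((max_val ** 2) / mse)

def mean_psnr(reference_frames, decoded_frames):
    # Average PSNR over a sequence: one common per-codec quality figure.
    return float(np.mean([psnr(r, d) for r, d in zip(reference_frames, decoded_frames)]))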


Proceedings of the 2017 Workshop on Wearable MultiMedia | 2017

Semi-Automatic Annotation with Predicted Visual Saliency Maps for Object Recognition in Wearable Video

Jenny Benois-Pineau; Mireya S. García-Vázquez; L. A. Oropesa Morales; A. A. Ramirez Acosta

Recognition of objects of a given category in visual content is one of the key problems in computer vision and multimedia, and it is strongly needed in wearable video capture for a wide range of socially important applications. Supervised learning approaches have proved to be the most effective for this task, but they require ground truth for training models. This is especially true for deep convolutional networks, and it also holds for other popular models such as SVMs on visual signatures. Annotating ground truth by drawing bounding boxes (BB) is a very tedious task requiring significant human resources. Research on predicting visual attention in images and videos has matured, particularly for bottom-up visual attention modeling. Hence, instead of annotating the ground truth manually with BBs, we propose to use automatically predicted salient areas as object locators for annotation. Since such saliency prediction is not perfect, active contour models are applied to the saliency maps to isolate the most prominent areas covering the objects. The approach is tested within a well-studied supervised learning framework: an SVM with a psycho-visually weighted Bag-of-Words. The egocentric GTEA dataset was used in the experiments. The difference in mAP (mean average precision) is less than 10 percent while the mean annotation time is 36% lower.
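To illustrate how a predicted saliency map can replace manual bounding-box annotation, the sketch below thresholds the map at a quantile and takes the bounding box of the most salient pixels. The paper instead fits active contours to the saliency maps; the quantile threshold and the synthetic Gaussian "saliency" in the example are illustrative assumptions.

import numpy as np

def bbox_from_saliency(saliency, keep_fraction=0.2):
    # Keep the top keep_fraction most salient pixels and return the
    # bounding box (x0, y0, x1, y1) that covers them.
    thresh = np.quantile(saliency, 1.0 - keep_fraction)
    ys, xs = np.nonzero(saliency >= thresh)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Example: a synthetic saliency blob centred in the frame.
yy, xx = np.mgrid[0:240, 0:320]
saliency = np.exp(-(((yy - 120) ** 2) + ((xx - 160) ** 2)) / (2 * 30.0 ** 2))
print(bbox_from_saliency(saliency))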


IETE Technical Review | 2011

Streaming Media Portability with the Emerging Support Open MAX

Alejandro A. Ramírez-Acosta; Mireya S. García-Vázquez; Sunil Kumar

In order to create multimedia products such as smartphones, media players and gaming consoles, developers need to optimize low-level code. Currently, this code is written mostly in assembly language and must be rewritten for each hardware platform. For this reason, developing media infrastructure is very expensive and time-consuming to integrate and program, and media infrastructure portability is therefore a multi-level industry problem. Application programmers need a cross-platform portable API (Application Programming Interface) for controlling high-level media operations. System integrators need a cross-vendor standard for media component integration with sophisticated data routing and robust synchronization. Software component vendors and silicon vendors need a reliable way to accelerate diverse codecs on diverse silicon. OpenMAX helps solve this industry problem: it was founded to meet the need to accelerate the growth of multimedia platforms and, therefore, the creation of final products. The end result is a standardized set of open APIs for a variety of multimedia applications. OpenMAX defines a royalty-free, cross-platform API that standardizes access to multimedia processing primitives used extensively in video codecs such as MPEG-4 and H.264/AVC, in audio and image codecs, and in 2D and 3D graphics. OpenMAX defines three layers as an open standard, designed holistically to provide complete media infrastructure portability. The authors present a description of the OpenMAX standard and the three layers that form the structure of multimedia portability. Some implementations on the market are also presented.

Collaboration


Dive into Mireya S. García-Vázquez's collaborations.

Top Co-Authors

Hector Perez-Meana
Instituto Politécnico Nacional

Juan Miguel Colores-Vargas
Autonomous University of Baja California

Jessica Beltrán
Instituto Politécnico Nacional

Mariko Nakano-Miyatake
Instituto Politécnico Nacional

Sunil Kumar
San Diego State University