Publications


Featured research published by José Mario De Martino.


International Conference on Biometrics | 2013

Can face anti-spoofing countermeasures work in a real world scenario?

Tiago de Freitas Pereira; André Anjos; José Mario De Martino; Sébastien Marcel

User authentication is an important step to protect information, and in this field face biometrics is advantageous. Face biometrics is natural, easy to use and less human-invasive. Unfortunately, recent work has revealed that face biometrics is vulnerable to spoofing attacks using low-tech equipment. This article assesses how well existing face anti-spoofing countermeasures work under more realistic conditions. Experiments carried out with two freely available video databases (Replay-Attack Database and CASIA Face Anti-Spoofing Database) show low generalization and possible database bias in the evaluated countermeasures. To generalize and deal with the diversity of attacks in a real-world scenario, we introduce two strategies that show promising results.
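
The cross-database protocol behind this assessment can be summarized as: train a countermeasure on one database and evaluate it on the other, then compare with the intra-database result. The sketch below illustrates that protocol only; the logistic-regression classifier and the feature arrays are placeholders, not the authors' actual pipeline.

```python
# Hedged sketch of a cross-database evaluation; feature arrays are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def evaluate(train_X, train_y, test_X, test_y):
    """Return the classification error (real access vs. attack) on the test set."""
    clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)
    return 1.0 - clf.score(test_X, test_y)

# usage with hypothetical feature arrays extracted from the two databases:
# intra = evaluate(replay_train_X, replay_train_y, replay_test_X, replay_test_y)
# cross = evaluate(replay_train_X, replay_train_y, casia_test_X, casia_test_y)
# a large gap between `intra` and `cross` indicates the database bias discussed above.
```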


EURASIP Journal on Image and Video Processing | 2014

Face liveness detection using dynamic texture

Tiago de Freitas Pereira; Jukka Komulainen; André Anjos; José Mario De Martino; Abdenour Hadid; Matti Pietikäinen; Sébastien Marcel

User authentication is an important step to protect information, and in this context, face biometrics is potentially advantageous. Face biometrics is natural, intuitive, easy to use, and less human-invasive. Unfortunately, recent work has revealed that face biometrics is vulnerable to spoofing attacks using cheap low-tech equipment. This paper introduces a novel and appealing approach to detect face spoofing using the spatiotemporal (dynamic texture) extensions of the highly popular local binary pattern operator. The key idea of the approach is to learn and detect the structure and the dynamics of the facial micro-textures that characterise real faces but not fake ones. We evaluated the approach with two publicly available databases (Replay-Attack Database and CASIA Face Anti-Spoofing Database). The results show that our approach performs better than state-of-the-art techniques following the provided evaluation protocols of each database.
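
As a rough illustration of the dynamic-texture idea, the sketch below computes local binary pattern histograms on the three orthogonal planes (XY, XT, YT) of a face-video volume and concatenates them, assuming scikit-image is available. It shows the concept only; it is not the multiresolution configuration or evaluation setup used in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern  # assumes scikit-image is installed

def lbp_top_histogram(volume, P=8, R=1, bins=59):
    """volume: (T, H, W) grayscale face video; returns concatenated XY/XT/YT histograms."""
    planes = {
        "XY": [volume[t, :, :] for t in range(volume.shape[0])],  # appearance
        "XT": [volume[:, y, :] for y in range(volume.shape[1])],  # horizontal motion
        "YT": [volume[:, :, x] for x in range(volume.shape[2])],  # vertical motion
    }
    feats = []
    for slices in planes.values():
        hist = np.zeros(bins)
        for img in slices:
            codes = local_binary_pattern(img, P, R, method="nri_uniform")
            h, _ = np.histogram(codes, bins=bins, range=(0, bins))
            hist += h
        feats.append(hist / hist.sum())   # normalize each plane's histogram
    return np.concatenate(feats)          # 3 x 59 = 177-dimensional descriptor
```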


International Conference on Computer Vision | 2012

LBP-TOP based countermeasure against face spoofing attacks

Tiago de Freitas Pereira; André Anjos; José Mario De Martino; Sébastien Marcel

User authentication is an important step to protect information, and in this field face biometrics is advantageous. Face biometrics is natural, easy to use and less human-invasive. Unfortunately, recent work has revealed that face biometrics is vulnerable to spoofing attacks using cheap, low-tech equipment. This article presents a countermeasure against such attacks based on the LBP-TOP operator, combining both space and time information into a single multiresolution texture descriptor. Experiments carried out with the Replay-Attack database show a Half Total Error Rate (HTER) improvement from 15.16% to 7.60%.
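
The Half Total Error Rate reported above is the average of the false acceptance rate (FAR, attacks wrongly accepted) and the false rejection rate (FRR, real accesses wrongly rejected) at a chosen decision threshold. A minimal sketch, with scores and labels as placeholder arrays:

```python
import numpy as np

def hter(scores, labels, threshold):
    """labels: 1 = real access, 0 = spoofing attack; higher score = more 'real'."""
    accept = scores >= threshold
    far = np.mean(accept[labels == 0])   # attacks wrongly accepted
    frr = np.mean(~accept[labels == 1])  # real accesses wrongly rejected
    return 0.5 * (far + frr)

# purely illustrative numbers: FAR = 0.10 and FRR = 0.052 give HTER = 0.5 * 0.152 = 7.6%
```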


Computers & Graphics | 2006

Facial animation based on context-dependent visemes

José Mario De Martino; Léo Pini Magalhães; Fabio Violaro

This paper presents a novel approach for the generation of realistic speech-synchronized 3D facial animation that copes with anticipatory and perseveratory coarticulation. The methodology is based on the measurement of 3D trajectories of fiduciary points marked on the face of a real speaker during the speech production of CVCV nonsense words. The trajectories are measured from standard video sequences using stereo vision photogrammetric techniques. The first stationary point of each trajectory associated with a phonetic segment is selected as its articulatory target. By clustering according to geometric similarity all articulatory targets of a same segment in different phonetic contexts, a set of phonetic context-dependent visemes accounting for coarticulation is identified. These visemes are then used to drive a set of geometric transformation/deformation models that reproduce the rotation and translation of the temporomandibular joint on the 3D virtual face, as well as the behavior of the lips, such as protrusion, and the opening width and height of the natural articulation. This approach is being used to generate 3D speech-synchronized animation from both natural and synthetic speech generated by a text-to-speech synthesizer.
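
The clustering step described above, which groups the articulatory targets of a segment by geometric similarity into context-dependent visemes, can be sketched as follows. The data layout and the distance threshold are illustrative assumptions, not the paper's measured values.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def context_dependent_visemes(targets, max_distance=2.0):
    """targets: (n_contexts, n_points * 3) flattened 3D fiduciary-point positions
    captured for one phonetic segment; returns a cluster id per context."""
    Z = linkage(targets, method="average", metric="euclidean")
    return fcluster(Z, t=max_distance, criterion="distance")

# each cluster becomes one viseme: contexts that share a cluster id reuse the same
# key shape when driving the deformation models of the 3D virtual face.
```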


Computers & Graphics | 1992

Production rendering on a local area network

José Mario De Martino; Rolf Köhling

The production of a computer animation is a computationally demanding task, especially when rendering complex scenes with features such as anti-aliasing, shadowing, texturing, reflections, and refractions. Depending on the complexity and duration of the animation, the generation of all images at full resolution and high quality can take days, weeks, even months. The use of a distributed renderer running on a network of workstations is a cost-effective solution that not only shortens the processing time, but also improves the reliability of the system. In this paper, we describe our approach to the problem, stressing in particular the load balancing and error recovery strategies of our solution.
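
The load balancing and error recovery ideas can be pictured as a master keeping a queue of frames: idle workers pull the next frame, and a frame whose worker fails is simply put back on the queue. The sketch below is only a schematic of that idea; the worker transport and the renderer invocation are placeholders, not the system described in the paper.

```python
import queue

def render_farm(num_frames, workers, render_frame):
    """Distribute frames over workers; re-queue frames whose worker fails."""
    pending = queue.Queue()
    for f in range(num_frames):
        pending.put(f)
    done = set()
    while len(done) < num_frames:
        frame = pending.get()
        worker = workers[frame % len(workers)]   # stand-in for "next idle worker"
        try:
            render_frame(worker, frame)          # e.g. remote invocation of the renderer
            done.add(frame)
        except Exception:
            pending.put(frame)                   # error recovery: reassign the frame
    return done
```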


Computer, Information, and Systems Sciences, and Engineering | 2010

Towards a Transcription System of Sign Language for 3D Virtual Agents

Wanessa Machado do Amaral; José Mario De Martino

Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than information presented in signing. Furthermore, for this community, signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, the recognition and reproduction of signs in these systems is generally an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system must provide sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that the articulation comes close to reality. Although many important studies on sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.
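
One way to picture the kind of animation-oriented notation argued for here is a record that stores each sign as an explicit sequence of holds and movements with timing and non-manual information, so that an avatar can replay it. The field names below are illustrative assumptions, not the proposed transcription system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    kind: str          # "hold" or "movement"
    handshape: str     # identifier from a handshape inventory
    location: str      # articulation point relative to the body
    orientation: str   # palm orientation
    duration_ms: int   # makes movement speed explicit for the avatar

@dataclass
class SignTranscription:
    gloss: str                           # label of the sign
    segments: List[Segment] = field(default_factory=list)
    facial_expression: str = "neutral"   # non-manual marker

# concatenating SignTranscription objects yields a signed utterance that an
# animation engine can interpret segment by segment.
```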


Proceedings of the SSPNET 2nd International Symposium on Facial Analysis and Animation | 2010

Compact 2D facial animation based on context-dependent visemes

Paula Dornhofer Paro Costa; José Mario De Martino

The expansion of mobile communications and the technological evolution of portable devices have raised new application possibilities that demand the development of more efficient and intuitive interfaces.


American Journal of Orthodontics and Dentofacial Orthopedics | 2017

Influence of different setups of the Frankfort horizontal plane on 3-dimensional cephalometric measurements

Rodrigo Mologni Gonçalves dos Santos; José Mario De Martino; Francisco Haiter Neto; Luis Augusto Passeri

Introduction: The Frankfort horizontal (FH) is a plane that intersects both porions and the left orbitale. However, other combinations of points have also been used to define this plane in 3-dimensional cephalometry. These variations are based on the hypothesis that they do not affect the cephalometric analysis. We investigated the validity of this hypothesis.

Methods: The material included cone-beam computed tomography data sets of 82 adult subjects with Class I molar relationship. A third-party method of cone-beam computed tomography-based 3-dimensional cephalometry was performed using 7 setups of the FH plane. Six lateral cephalometric hard tissue measurements relative to the FH plane were carried out for each setup. Measurement differences were calculated for each pair of setups of the FH plane. The number of occurrences of differences greater than the limits of agreement was counted for each of the 6 measurements.

Results: Only 3 of 21 pairs of setups had no occurrences for the 6 measurements. No measurement had no occurrences for the 21 pairs of setups. Setups based on left or right porion and both orbitales had the greatest number of occurrences for the 6 measurements.

Conclusions: This investigation showed that significant and undesirable measurement differences can be produced by varying the definition of the FH plane.

Highlights:
- The test method shows that different setups of the FH plane affect the cephalometric analysis.
- Replacing the left orbitale by the right one in the FH plane affects the results.
- Replacing the right orbitale by the left one in the FH plane affects the results.
- Replacing the left porion by the right one in the FH plane affects the results.
- Replacing the right porion by the left one in the FH plane affects the results.
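
To make the geometric point concrete: the FH plane is determined by three landmarks, and measurements are taken relative to it, so swapping one landmark for another changes the plane and hence the measurements. The sketch below illustrates this with placeholder coordinates; it is not the study's third-party cephalometric method.

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D landmarks."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def angle_to_plane(line_vector, normal):
    """Angle (degrees) between an anatomical line and the plane with this normal."""
    v = np.asarray(line_vector, dtype=float)
    s = abs(np.dot(v, normal)) / np.linalg.norm(v)
    return np.degrees(np.arcsin(np.clip(s, 0.0, 1.0)))

# comparing angle_to_plane(line, plane_normal(porion_R, porion_L, orbitale_L))
# with a setup that swaps orbitale_L for orbitale_R quantifies the kind of
# measurement differences discussed in this article.
```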


PROPOR | 2018

Identifying Intensification Processes in Brazilian Sign Language in the Framework of Brazilian Portuguese Machine Translation

Francisco Aulísio dos Santos Paiva; José Mario De Martino; Plínio Almeida Barbosa; Pablo Picasso Feliciano de Faria; Ivani Rodrigues Silva; Luciana A. Rosa

Brazilian Portuguese (BP) to Brazilian Sign Language (Libras) machine translation differs from traditional automatic translation between oral languages, especially because Libras is a visuospatial language. In our approach, the final step of the translation process is a 3D avatar signing the translated content. However, to obtain understandable signing it is necessary that the translation take into account all linguistic levels of Libras. Currently, BP-Libras translation approaches neglect important aspects of the intensification of words and the construction of plural nouns, generating inappropriate translations. This paper presents a study of the intensification of adjectives and verbs and of the pluralization of nouns, with the aim of contributing to the advancement of automatic sign language translation. We apply a hierarchical clustering method to the classification of Libras' intensified signs, considering the modification of manual parameters. The method allows for the identification of distinct intensification patterns and plural marking in BP sentences. The results of our study can be used to improve the intelligibility of the signing avatar.
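
The hierarchical clustering step can be sketched as below, assuming each realization of a sign is summarized by modifications of its manual parameters (for example movement amplitude, duration, and number of repetitions). The feature names, toy values, and number of clusters are illustrative assumptions, not data from the study.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# rows: realizations of one sign; columns: amplitude, duration (s), repetitions
features = np.array([
    [1.0, 0.40, 1],
    [1.1, 0.42, 1],
    [1.9, 0.70, 2],   # larger, slower, repeated movement: candidate intensified form
    [2.0, 0.75, 2],
])

labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(features)
print(labels)   # realizations grouped into plain vs. intensified patterns
```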


Universal Access in the Information Society | 2017

Signing avatars: making education more inclusive

José Mario De Martino; Ivani Rodrigues Silva; Carmen Zink Bolognini; Paula Dornhofer Paro Costa; Kate Mamhy Oliveira Kumada; Luis Coradine; Patrick H. S. Brito; Wanessa Machado do Amaral; Angelo Brandão Benetti; Enzo Telles Poeta; Leandro Martin Angare; Carolina Monteiro Ferreira; Davi Faria De Conti

In Brazil, there are approximately 9.7 million inhabitants who are deaf or hard of hearing. Moreover, about 30% of the Brazilian deaf community is illiterate in Brazilian Portuguese, due to the difficulty of offering deaf children an inclusive environment based on bilingual education. Currently, the prevailing teaching practice depends heavily on verbal language and on written material, making the inclusion of the deaf a challenging task. This paper presents the authors' approach to tackling this problem and improving deaf students' access to written material in order to help them master Brazilian Portuguese as a second language. We describe an ongoing project aimed at developing an automatic Brazilian Portuguese-to-Libras translation system that presents the translated content via an animated virtual human, or avatar. The paper describes the methodology adopted to compile a source-language corpus with the needs of deaf students as the central focus. It also describes the construction of a parallel Brazilian Portuguese/Brazilian Sign Language (Libras) corpus based on motion capture technology. The envisioned translation architecture includes the definition of an Intermediate Language to drive the signing avatar. The results of a preliminary assessment of sign intelligibility highlight the application's potential.
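
The overall data flow of such a system can be pictured as: Brazilian Portuguese text is translated into tokens of an Intermediate Language, and those tokens select motion-captured signs for the avatar to play. The sketch below follows only that high-level flow; the toy lexicon and all names are illustrative assumptions, not the project's actual Intermediate Language or translation engine.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IntermediateToken:
    gloss: str              # sign identifier in the Libras corpus
    non_manual: str = ""    # e.g. facial expression accompanying the sign

def translate(portuguese_text: str) -> List[IntermediateToken]:
    # placeholder for the machine-translation component
    toy_lexicon = {"escola": "SCHOOL", "casa": "HOUSE"}
    return [IntermediateToken(toy_lexicon[w])
            for w in portuguese_text.lower().split() if w in toy_lexicon]

def play(tokens: List[IntermediateToken]) -> None:
    # placeholder for the avatar engine: each gloss indexes a motion-capture clip
    for t in tokens:
        print(f"playing mocap clip for {t.gloss} ({t.non_manual or 'neutral face'})")

play(translate("escola casa"))
```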

Collaboration


Dive into José Mario De Martino's collaborations.

Top Co-Authors

Luis Augusto Passeri
State University of Campinas

Léo Pini Magalhães
State University of Campinas

Tatiane Silvia Leite
State University of Campinas