Tiago Maritan Ugulino de Araújo
Federal University of Paraíba
Publications
Featured research published by Tiago Maritan Ugulino de Araújo.
Information Sciences | 2014
Tiago Maritan Ugulino de Araújo; Felipe Silva Ferreira; Danilo Assis Nobre dos S. Silva; Leonardo Dantas de Oliveira; Eduardo De Lucena Falcão; Leonardo Araújo Domingues; Vandhuy F. Martins; Igor A. C. Portela; Yúrika Sato Nóbrega; Hozana Raquel Gomes De Lima; Guido Lemos de Souza Filho; Tatiana Aires Tavares; Alexandre Nóbrega Duarte
Deaf people have serious problems accessing information due to their inherent difficulties in dealing with spoken and written languages. This work addresses this problem by proposing a solution for the automatic generation and insertion of sign language video tracks into captioned digital multimedia content. Our solution can process a subtitle stream and generate the sign language track in real time. Furthermore, it has a set of mechanisms that exploit human computation to generate and maintain its linguistic constructions. The solution was instantiated for the Digital TV, Web, and Digital Cinema platforms and evaluated through a set of experiments with deaf users.
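The core of such a pipeline is mapping each caption word to a sign from a community-maintained dictionary, falling back to fingerspelling for unknown words. A minimal sketch, assuming a simple word-to-gloss dictionary (all names here are illustrative, not the paper's actual API):

```python
FALLBACK = "FINGERSPELL"

def caption_to_glosses(caption, dictionary):
    """Map each caption word to a sign gloss, fingerspelling unknown words."""
    glosses = []
    for word in caption.upper().split():
        token = word.strip(".,!?")
        if token in dictionary:
            glosses.append(dictionary[token])
        else:
            # Unknown word: mark it for letter-by-letter fingerspelling.
            glosses.append(f"{FALLBACK}:{token}")
    return glosses

# Hypothetical dictionary entries maintained via human computation.
community_dictionary = {"HELLO": "HELLO-SIGN", "WORLD": "WORLD-SIGN"}
print(caption_to_glosses("Hello, world today!", community_dictionary))
# → ['HELLO-SIGN', 'WORLD-SIGN', 'FINGERSPELL:TODAY']
```

The human-computation mechanisms described in the paper would grow `community_dictionary` over time, shrinking the fingerspelling fallback.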
Brazilian Symposium on Multimedia and the Web | 2015
Manuella A. C. B. Lima; Tiago Maritan Ugulino de Araújo; Erickson S. de Oliveira
In the scientific literature, there are some solutions that address the automatic machine translation of sign language content. These solutions aim to reduce the communication and information-access barriers faced by deaf people in Information and Communication Technologies (ICTs). However, most of these solutions do not explore syntactic and semantic aspects in the machine translation process, especially when they are designed for general domains. As a result, the quality of the generated accessible content is limited, and deaf users are consequently resistant to using these solutions. To reduce this problem, in this paper we propose a solution that incorporates syntactic and semantic aspects into the translation of VLibras, a service for the machine generation of Brazilian Sign Language (LIBRAS) content for ICTs (Digital TV, Web, Digital Cinema, and mobile devices). This solution involves a formal rule description language modeled to create translation rules; the definition of a grammar exploiting these features; and their integration with the VLibras service. To evaluate the solution, computational tests were performed using the WER and BLEU metrics to assess the quality of the output generated by the solution. The results show that the proposed approach improves on the current version of the VLibras translator.
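A syntactic translation rule of the kind described here typically reorders part-of-speech patterns from Portuguese word order into LIBRAS gloss order. A minimal sketch of applying one such rule (the rule format is hypothetical; the actual VLibras grammar differs):

```python
def apply_rule(tagged, pattern, order):
    """Reorder every occurrence of a POS pattern according to `order`."""
    out, i, n = [], 0, len(pattern)
    while i < len(tagged):
        window = tagged[i:i + n]
        if [tag for _, tag in window] == list(pattern):
            # Pattern matched: emit the words in the rule's target order.
            out.extend(window[j] for j in order)
            i += n
        else:
            out.append(tagged[i])
            i += 1
    return out

# Example rule: in LIBRAS glosses, negation typically follows the verb,
# so "NAO GOSTAR" becomes "GOSTAR NAO".
sentence = [("EU", "PRON"), ("NAO", "NEG"), ("GOSTAR", "V")]
glosses = apply_rule(sentence, ("NEG", "V"), (1, 0))
print([w for w, _ in glosses])  # → ['EU', 'GOSTAR', 'NAO']
```

A full grammar would chain many such rules, plus semantic checks that this sketch omits.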
Journal of Heuristics | 2016
Tiago Maritan Ugulino de Araújo; Lisieux Marie Marinho dos Santos Andrade; Carlos Magno; Lucídio dos Anjos Formiga Cabral; Roberto Quirino do Nascimento; Cláudio Nogueira de Meneses
Several papers in the scientific literature use metaheuristics to solve continuous global optimization problems. To perform this task, some metaheuristics originally proposed for combinatorial optimization problems, such as the Greedy Randomized Adaptive Search Procedure (GRASP), Tabu Search, and Simulated Annealing, among others, have been adapted to continuous global optimization. Proposed by Hirsch et al., Continuous-GRASP (C-GRASP) is one example of this group of metaheuristics. C-GRASP is an adaptation of GRASP for solving continuous global optimization problems under box constraints. It is a simple-to-implement, derivative-free, and widely applicable method. However, according to Hedar, due to its random construction, C-GRASP may fail to detect promising search directions, especially in the vicinity of minima, which may result in slow convergence. To minimize this problem, in this paper we propose a set of methods to direct the search in C-GRASP, called Directed Continuous-GRASP (DC-GRASP). The proposal is to combine the ability of C-GRASP to diversify the search over the space with efficient local search strategies that accelerate its convergence. We compare DC-GRASP with C-GRASP and other metaheuristics from the literature on a set of standard test problems whose global minima are known. Computational results show the effectiveness and efficiency of the proposed methods, as well as their ability to accelerate the convergence of C-GRASP.
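To make the construction/local-search split concrete, here is a heavily simplified C-GRASP-style iteration, not the authors' code: a greedy randomized construction over a discretized box (choosing each coordinate from a restricted candidate list) followed by a grid local search. Parameter names and defaults are illustrative.

```python
import random

def c_grasp(f, lower, upper, h=0.1, iters=20, alpha=0.3, seed=0):
    """Simplified C-GRASP sketch: randomized construction + grid local search."""
    rng = random.Random(seed)
    dim = len(lower)
    best_x, best_val = None, float("inf")
    for _ in range(iters):
        # Construction: for each coordinate, pick randomly among the
        # alpha-fraction best grid values (the restricted candidate list).
        x = [rng.uniform(lower[d], upper[d]) for d in range(dim)]
        for d in range(dim):
            grid = [lower[d] + k * h
                    for k in range(int((upper[d] - lower[d]) / h) + 1)]
            scored = sorted(grid, key=lambda v: f(x[:d] + [v] + x[d + 1:]))
            rcl = scored[:max(1, int(alpha * len(scored)))]
            x[d] = rng.choice(rcl)
        # Local search: take improving +/- h coordinate steps until stuck.
        improved = True
        while improved:
            improved = False
            for d in range(dim):
                for step in (-h, h):
                    y = list(x)
                    y[d] = min(max(y[d] + step, lower[d]), upper[d])
                    if f(y) < f(x):
                        x, improved = y, True
        val = f(x)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Minimize a shifted sphere on [-2, 2]^2; the minimum is at (0.5, -0.5).
x, val = c_grasp(lambda p: (p[0] - 0.5) ** 2 + (p[1] + 0.5) ** 2,
                 [-2, -2], [2, 2])
```

The "directed" variants proposed in the paper replace the blind ±h neighborhood above with strategies that bias moves toward promising directions.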
Journal of the Brazilian Computer Society | 2013
Tiago Maritan Ugulino de Araújo; Felipe Silva Ferreira; Danilo Assis Nobre dos S. Silva; Felipe Hermínio Lemos; Gutenberg Pessoa Botelho Neto; Derzu Omaia; Guido Lemos de Souza Filho; Tatiana Aires Tavares
Deaf people have serious difficulties accessing information. Support for sign language (their primary means of communication) is rarely addressed in information and communication technologies. Furthermore, there is a lack of work on machine translation for sign language in real-time and open-domain scenarios, such as TV. To minimize these problems, in this paper we propose an architecture for machine translation into Brazilian Sign Language (LIBRAS) and its integration, implementation, and evaluation for digital TV systems, a real-time and open-domain scenario. The system, called LibrasTV, allows LIBRAS windows to be generated and displayed automatically from a closed-caption input stream in Brazilian Portuguese. LibrasTV also uses strategies such as low-time-consuming text-to-gloss machine translation and LIBRAS dictionaries to minimize the computational resources needed to generate the LIBRAS windows in real time. As a case study, we implemented a prototype of LibrasTV for the Brazilian digital TV system and performed tests with Brazilian deaf users to evaluate it. Our preliminary evaluation indicated that the proposal is efficient, since its delays and bandwidth usage are low. In addition, as previously reported in the literature, avatar-based approaches are not the first choice of most deaf users, who prefer human translation. However, when human interpreters are not available, our proposal is a practical and feasible alternative to fill this gap.
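One real-time concern in a system like LibrasTV is keeping the avatar's signs aligned with the closed-caption timing. A minimal scheduling sketch, assuming each caption arrives with display timestamps (identifiers are illustrative, not LibrasTV's actual API):

```python
def schedule_glosses(captions, glosses_per_caption):
    """Assign each gloss an equal slice of its caption's display interval."""
    schedule = []
    for (start, end, _), glosses in zip(captions, glosses_per_caption):
        slot = (end - start) / len(glosses)
        for k, gloss in enumerate(glosses):
            # Play the k-th gloss at the start of its slice.
            schedule.append((round(start + k * slot, 3), gloss))
    return schedule

# Captions as (start_seconds, end_seconds, text) triples.
captions = [(0.0, 2.0, "bom dia"), (2.0, 3.0, "tchau")]
print(schedule_glosses(captions, [["BOM", "DIA"], ["TCHAU"]]))
# → [(0.0, 'BOM'), (1.0, 'DIA'), (2.0, 'TCHAU')]
```

A real implementation would also budget for translation latency and variable sign durations from the LIBRAS dictionary.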
14th Symposium on Virtual and Augmented Reality | 2012
Danilo Assis Nobre dos S. Silva; Tiago Maritan Ugulino de Araújo; Leonardo Dantas; Yúrika Sato Nóbrega; Hozana Raquel Gomes De Lima; Guido Lemos de Souza Filho
Deaf people communicate naturally through gestural and visual languages called sign languages. These languages are natural languages, composed of lexical items called signs, and have their own vocabulary and grammar. In this paper, we propose a formal, expressive, and consistent language to describe signs in Brazilian Sign Language (LIBRAS). This language allows the definition of all parameters of a sign and, consequently, the generation of an animation for that sign. In addition, the proposed language is flexible in the sense that new parameters (or phonemes) can be defined “on the fly”. As a case study for the proposed language, a system for the collaborative construction of a LIBRAS vocabulary based on 3D humanoid avatars was also developed. Tests with Brazilian deaf users were performed to evaluate the proposal.
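The idea that a sign is fully determined by a small set of parameters (handshape, location, movement, orientation, non-manual markers) can be sketched as a serializable record, which an avatar renderer could then consume. Field names below are illustrative; the paper's actual description language differs.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Sign:
    gloss: str
    handshape: str         # handshape identifier
    location: str          # articulation point relative to the body
    movement: str          # movement type and direction
    orientation: str       # palm orientation
    facial_expression: str = "neutral"  # non-manual marker

# Because a sign is fully determined by its parameters, it can be
# serialized, shared collaboratively, and rendered as an animation.
casa = Sign(gloss="CASA", handshape="flat-B", location="in-front-of-chest",
            movement="hands-meet-roof-shape", orientation="palms-facing")
encoded = json.dumps(asdict(casa), ensure_ascii=False)
decoded = Sign(**json.loads(encoded))
```

Extending the record with new parameters "on the fly", as the paper proposes, would amount to allowing open-ended key/value fields rather than a fixed schema.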
International Symposium on Multimedia | 2011
Felipe Silva Ferreira; Felipe Hermínio Lemos; Gutenberg Pessoa Botelho Neto; Tiago Maritan Ugulino de Araújo; Guido Lemos de Souza Filho
Sign languages are natural languages used by deaf people to communicate. Currently, support for sign language on TV is still limited to manual approaches, where a window with a sign language interpreter is embedded in the original video program. Some related works, such as Amorim et al. [13] and Araújo et al. [14], proposed solutions for this problem, but some gaps remain to be addressed. This paper proposes a solution to provide support for sign language in middlewares compatible with the ITU J.202 specification [18]. An important feature of this solution is that it is not necessary to adapt or create new APIs (Application Programming Interfaces) to support sign languages. A case study was developed to validate this solution, implemented using Ginga-J (the procedural part of the Ginga middleware), a middleware compliant with ITU J.202. Tests with deaf people confirm the feasibility of the proposed solution.
Brazilian Symposium on Multimedia and the Web | 2014
Leonardo Araújo Domingues; Felipe Silva Ferreira; Tiago Maritan Ugulino de Araújo; Manoel Gomes da Silva Neto; Lucenildo Lins Aquino Júnior; Guido Lemos de Souza Filho; Felipe Hermínio Lemos
Deaf people face many problems in carrying out their daily activities, mainly because of barriers both to accessing information and to communicating with people without disabilities. In this context, the main goal of this paper is to identify the main problems faced by deaf people in accessing information in movie theaters and to propose a solution that better addresses their requirements. To this end, a computational system was developed that automatically generates and distributes accessible video tracks in Brazilian Sign Language (Língua Brasileira de Sinais, LIBRAS) in cinema rooms. This solution uses mobile devices as secondary screens, so that deaf people can access the content presented in their natural way of communication. Finally, experiments were performed with groups of Brazilian deaf users to assess the viability of the proposed solution, and the collected data are analyzed and discussed.
Brazilian Symposium on Multimedia and the Web | 2016
Leonardo Araújo Domingues; Virgínia P. Campos; Tiago Maritan Ugulino de Araújo; Guido Lemos de Souza Filho
Technological advances in digital cinema have allowed people to encounter experiences that awaken their imagination and expose them to other realities. Experiencing these realities can be more difficult for the blind or visually impaired, however. In our cinema rooms, visual impairments create barriers that can restrict a person's access to critical information. Therefore, we propose a solution that attempts to eliminate these barriers by using a computational system that automatically generates and distributes accessible audio tracks that describe the digital cinema experience. Using mobile devices to deliver the content, visually impaired participants took part in an experiment to confirm or reject the viability of the solution presented in this article. The results of the experiment demonstrated that our computational system may be a feasible solution.
Advanced Video and Signal Based Surveillance | 2017
Virgínia P. Campos; Luiz M. G. Gonçalves; Tiago Maritan Ugulino de Araújo
The use of surveillance cameras as a monitoring tool for home environments, the elderly, and children has become a common practice. However, people with visual impairments have difficulty using this kind of device because it relies only on visual information. To solve this problem, this work proposes a solution that combines deep learning techniques for object recognition in video with the accessibility resource called audio description in order to produce a narrative of the detected information. The result is a surveillance system based on narratives of video objects that provides useful contextual information about the environment for visually impaired people. Experiments with demos are presented to verify and validate the system.
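The narration step of such a system can be sketched independently of the detector: given per-frame detections as (label, confidence) pairs from any deep object-recognition model, keep those above a threshold and phrase them as an audio-description sentence. This is an illustrative sketch, not the paper's implementation.

```python
def narrate(detections, threshold=0.5):
    """Build a spoken description from (label, confidence) detections."""
    kept = [label for label, conf in detections if conf >= threshold]
    if not kept:
        return "No objects detected in the scene."
    # Count occurrences of each label, preserving detection order.
    counts = {}
    for label in kept:
        counts[label] = counts.get(label, 0) + 1
    parts = [f"{n} {label}" + ("s" if n > 1 else "")
             for label, n in counts.items()]
    return "Scene contains " + ", ".join(parts) + "."

# Hypothetical detector output for one video frame.
frame = [("person", 0.92), ("person", 0.81), ("dog", 0.66), ("chair", 0.31)]
print(narrate(frame))  # → Scene contains 2 persons, 1 dog.
```

The resulting sentence would then be passed to a text-to-speech engine to produce the audio description track.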
Brazilian Symposium on Multimedia and the Web | 2018
Angelina Sthephanny da Silva Sales; Luana Vetter Reis; Tiago Maritan Ugulino de Araújo; Yuska Paola Costa Aguiar
Assistive Technology (AT) resources enable people with disabilities to access products, services, and information, favoring their inclusion in various spheres of society. To enhance the autonomy of people with disabilities in the use of such resources, it is important to evaluate the resources to ensure that they meet the needs and expectations of the target audience. User-based assessments are commonly based on questionnaires, interviews, or focus groups. However, these traditional strategies are inadequate when the users participating in the evaluation of the AT resource are deaf and not fluent in Portuguese. To guarantee the autonomy of deaf users in the process of evaluating AT resources assigned to them, a multimedia, online questionnaire adapted to Brazilian Sign Language (LIBRAS), called TUTAForm, was elaborated from interactions with LIBRAS teachers/interpreters and deaf people. TUTAForm comprises questions for collecting data about the users' profile, their level of satisfaction, and their emotional state after using the resource under evaluation. To verify the suitability of TUTAForm with its potential users, experiments were carried out to observe the interaction of two groups of 5 deaf participants using the fork game in dactylology and VLibras-Mobile, followed by the application of TUTAForm. As a result, a good understanding of the contents was observed; however, suggestions were made to improve the representation of some answer options, due to confusion in understanding the scales used to define levels of schooling and deafness, for example.