Daniela Gorski Trevisan
Federal Fluminense University
Publications
Featured research published by Daniela Gorski Trevisan.
Computer Methods and Programs in Biomedicine | 2008
Raphael Olszewski; Marta Becker Villamil; Daniela Gorski Trevisan; Luciana Porcher Nedel; Carla Maria Dal Sasso Freitas; Hervé Reychler; Benoît Macq
Computer-assisted maxillofacial orthognathic surgery is an emerging, interdisciplinary field linking orthognathic surgery, remote signal engineering and three-dimensional (3D) medical imaging. Most existing computational solutions rely on several specialized systems, which complicates both the transfer of information between planning stages and the surgeons' use of the tools. To address this issue, we present a single computer-based system that integrates the proposed modules for planning and assisting maxillofacial surgery. With it, we aim to replace the current standard orthognathic preoperative planning and to carry information from the virtual plan into the real operative field. The system prototype, including three-dimensional cephalometric analysis, static and dynamic virtual orthognathic planning, and mixed reality transfer of information to the operating room, is described, and the first results obtained are presented.
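As an illustration of the integration idea described in this abstract (one system sharing a single patient model across planning stages instead of exchanging files between separate tools), the sketch below shows one minimal way such a pipeline could be wired together. All class and function names, and the sample values, are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: planning stages pass one shared patient model along,
# rather than exporting and re-importing data between specialized systems.
from dataclasses import dataclass, field

@dataclass
class PatientModel:
    """Shared 3D patient data handed unchanged from module to module (hypothetical)."""
    ct_volume_id: str
    landmarks: dict = field(default_factory=dict)        # cephalometric landmarks
    osteotomy_plan: dict = field(default_factory=dict)   # planned bone movements

def cephalometric_analysis(model: PatientModel) -> PatientModel:
    # Hypothetical: annotate 3D landmarks that later stages reuse.
    model.landmarks["nasion"] = (0.0, 82.1, 34.5)
    return model

def virtual_planning(model: PatientModel) -> PatientModel:
    # Hypothetical: record the planned maxillary advancement.
    model.osteotomy_plan["maxilla_advance_mm"] = 4.0
    return model

def transfer_to_operating_room(model: PatientModel) -> str:
    # Hypothetical: the mixed-reality guidance module receives the same model.
    return f"overlaying plan for {model.ct_volume_id}: {model.osteotomy_plan}"

if __name__ == "__main__":
    patient = PatientModel(ct_volume_id="case-001")
    print(transfer_to_operating_room(virtual_planning(cephalometric_analysis(patient))))
```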
ubiquitous computing | 2009
Alexandre Benoit; Laurent Bonnaud; Alice Caplier; Phillipe Ngo; Lionel Lawson; Daniela Gorski Trevisan; Vjekoslav Levacic; Céline Mancas; Guillaume Chanel
This paper presents a driver simulator that takes into account information about the user's state of mind (level of attention, fatigue and stress). The analysis of the user's state of mind is based on video data and biological signals. Facial movements such as eye blinking, yawning and head rotations are detected in the video data and used to evaluate the driver's fatigue and attention level. The user's electrocardiogram and galvanic skin response are recorded and analyzed to evaluate the driver's stress level. The driver simulator software is modified so that the system can react appropriately to these critical situations of fatigue and stress: audio and visual messages are sent to the driver, wheel vibrations are generated, and the driver is expected to react to the alert messages. A multi-threaded system is proposed to support the messages sent through the different modalities. Strategies for data fusion and fission are also provided. Some of these components are integrated within the first prototype of OpenInterface, the multimodal SIMILAR platform.
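To make the fusion/fission idea in this abstract concrete, here is a toy sketch of combining a video-based fatigue score with a biosignal-based stress score and fanning the resulting alerts out to several output modalities. It is not the OpenInterface implementation; the thresholds, function names and modality list are assumptions made for the example.

```python
# Illustrative sketch only: fuse driver-state estimates into alerts (fusion),
# then distribute each alert across the available output modalities (fission).
from typing import List

FATIGUE_THRESHOLD = 0.7   # assumed normalized score from blink/yawn/head-pose detection
STRESS_THRESHOLD = 0.6    # assumed normalized score from ECG + galvanic skin response

def fuse(fatigue: float, stress: float) -> List[str]:
    """Return the alert messages that should be issued for the current driver state."""
    alerts = []
    if fatigue > FATIGUE_THRESHOLD:
        alerts.append("fatigue: take a break")
    if stress > STRESS_THRESHOLD:
        alerts.append("stress: slow down")
    return alerts

def fission(alerts: List[str]) -> List[str]:
    """Map each alert onto the output modalities (audio, visual, wheel vibration)."""
    modalities = ("audio", "visual", "wheel_vibration")
    return [f"{m}: {a}" for a in alerts for m in modalities]

if __name__ == "__main__":
    for message in fission(fuse(fatigue=0.8, stress=0.4)):
        print(message)
```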
2010 Brazilian Symposium on Games and Digital Entertainment | 2010
André Augusto Pereira Brandão; Daniela Gorski Trevisan; Lenisa Brandão; Bruno Moreira; Giancarlo Nascimento; Cristina Nader Vasconcelos; Esteban Clua; Pedro Thiago Mourão
There are not many initiatives in the area of game development for children with special needs, especially children with Down syndrome. The major purpose of our research is to promote the cognitive development of disabled children in the context of inclusive education. To do so, we address aspects of interaction, communication and game design in stimulating selected cognitive abilities. By using a Human-Computer Interaction method based on the Inspection of Evaluation, it was possible to study and understand user interaction with the interface and thus examine the positive aspects as well as the communicability problems found with the JECRIPE game, a game developed especially for children with Down syndrome of preschool age.
CADUI | 2005
Murielle Florins; Daniela Gorski Trevisan; Jean Vanderdonckt
Continuity as a usability property has been used in mixed reality systems and in multi-platform systems. This paper compares the definitions given to the concept in both fields and then proposes a consolidated definition of continuity.
international conference on 3d web technology | 2004
Jean Vanderdonckt; Chow Kwok Chieu; Laurent Bouillon; Daniela Gorski Trevisan
A transformational approach is presented that models a virtual reality scene based on Abstract Interaction Objects (AIOs) from which virtual user interfaces can be designed (by progressive AIO assembly), generated (by automated code generation in VRML/X3D), and evaluated (by static analysis of the ongoing model). The underlying model is shared with user interface descriptions in other worlds, such as Windows applications and web pages. It is therefore possible to reuse existing interfaces from these worlds to automatically create their counterparts in VRML/X3D. This process is called virtualization of flat user interfaces. The approach allows the progressive and flexible retargeting of non-virtual interfaces into the virtual world, achieving better integration with other applications.
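As a rough illustration of the code-generation step this abstract describes (mapping an abstract interaction object to a concrete X3D fragment), the sketch below shows one simplified mapping rule. The AIO vocabulary and the particular X3D nodes chosen are assumptions for the example; the transformation rules in the paper are richer.

```python
# Illustrative sketch only: map an abstract "button" interaction object onto a
# concrete X3D fragment (a box with a touch sensor standing in for the button).
from dataclasses import dataclass

@dataclass
class AbstractButton:
    label: str
    position: tuple  # (x, y, z) placement in the scene

def to_x3d(aio: AbstractButton) -> str:
    """Emit an X3D fragment for the given abstract interaction object."""
    x, y, z = aio.position
    return (
        f'<Transform translation="{x} {y} {z}">\n'
        f'  <Shape><Box size="2 1 0.2"/></Shape>\n'
        f'  <TouchSensor description="{aio.label}"/>\n'
        f'</Transform>'
    )

if __name__ == "__main__":
    print(to_x3d(AbstractButton(label="OK", position=(0, 0, -5))))
```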
task models and diagrams for user interface design | 2004
Daniela Gorski Trevisan; Monica Gemo; Jean Vanderdonckt; Benoît Macq
Currently, very few techniques are available to support the design of Augmented and Mixed Reality (MR) systems. Task elicitation is more complex for MR systems than for traditional information systems. Having multiple sources of information and two worlds of interaction (real and virtual) involves making choices about what to attend to and when. Interaction based on traditional input and output devices is not effective in a mixed scenario: it distracts the user from the task at hand and may create a severe cognitive seam. Understanding, formalizing and modeling these aspects can help designers to assess interaction at all stages of development. We are interested in focused applications that require the user's hands to be free for real-world tasks and in understanding how the user's task focus drives the design of MR systems. The contribution of this paper is twofold: it first illustrates the specific requirements posed by such systems, and then it shows through a case study that the tools commonly employed for task modeling currently offer no complete support for modeling these aspects.
Virtual Reality | 2004
Daniela Gorski Trevisan; Jean Vanderdonckt; Benoît Macq
Recent progress in the overlay and registration of digital information on the user's workspace in a spatially meaningful way has allowed mixed reality (MR) to become a more effective operational medium. However, research into software structures, design methods and design support tools for MR systems is still in its infancy. In this paper, we propose a conceptual classification of the design space to support the development of MR systems. The proposed design space (DeSMiR) is an abstract tool for systematically exploring several design alternatives at an early stage of interaction design, without being biased towards a particular modality or technology. Once the abstract design possibilities have been identified and a concrete design decision has been taken (i.e. a specific modality has been selected), a concrete MR application can be considered in order to analyse its interaction techniques in terms of continuous interaction properties. We suggest that our design space can be applied to the design of several kinds of MR applications, especially those that tolerate very little distraction of the user's focus and in which smooth connections and interactions between the real and virtual worlds are critical to the system's development. An image-guided surgery (IGS) system is used as a case study.
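Purely to illustrate what "systematically exploring design alternatives" over a classification of dimensions can look like in practice, here is a small enumeration sketch. The dimension names and values below are invented for the example and are not the dimensions DeSMiR actually defines.

```python
# Illustrative sketch only: enumerate every combination of one value per
# (assumed) design dimension, as a stand-in for exploring a design space.
from itertools import product

DESIGN_DIMENSIONS = {
    "output_modality": ["visual_overlay", "audio", "haptic"],
    "world_of_interaction": ["real", "virtual", "mixed"],
    "user_focus": ["patient", "display"],
}

def enumerate_alternatives(dimensions: dict):
    """Yield one dict per design alternative (one value chosen per dimension)."""
    names = list(dimensions)
    for values in product(*(dimensions[n] for n in names)):
        yield dict(zip(names, values))

if __name__ == "__main__":
    for alternative in enumerate_alternatives(DESIGN_DIMENSIONS):
        print(alternative)
```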
Medical Imaging 2003: Visualization, Image-Guided Procedures, and Display | 2003
Daniela Gorski Trevisan; Jean Vanderdonckt; Benoît Macq; Christian Raftopoulos
Compared to conventional interfaces, image-guided surgery (IGS) interfaces contain a richer variety of more complex objects and interaction types. The main interactive characteristic emerging from such systems is that the interaction focus is shared between the physical space, where the surgeon interacts with the patient using surgical tools, and the digital world, where the surgeon interacts with the system. This results in two different and likely inconsistent interfaces, whose interaction discontinuities break the natural workflow and force the user to switch between operation modes. Our work addresses these features by focusing on model, interaction and ergonomic integrity analysis, considering the Augmented Reality paradigm applied to IGS procedures and, more specifically, to a neurosurgery case study. We followed a model-based methodology, including new extensions to support interaction technologies and to ensure interaction continuity according to the IGS system requirements. As a result, designers can discover errors as early as possible in the development process and produce an interface design that coherently integrates constraints favoring continuous rather than discrete interaction, avoiding possible inconsistencies.
ieee international conference on serious games and applications for health | 2017
Thiago Malheiros Porcino; Esteban Clua; Daniela Gorski Trevisan; Cristina Nader Vasconcelos; Luis Valente
We are experiencing a growing trend toward head-mounted display systems in games and serious games, which is likely to become established practice in the near future. While these systems provide highly immersive experiences, many users report discomfort symptoms such as nausea, sickness and headaches, among others. When VR is used for health applications this is even more critical, since the discomfort may interfere with treatment. In this work we discuss possible causes of these issues and present possible solutions as design guidelines that may mitigate them. In this context, we explore in depth a dynamic focus solution for reducing discomfort in immersive virtual environments with first-person navigation. This solution applies a heuristic model of visual attention that works in real time. The work also discusses a case study (a first-person spatial shooter demo) that applies this solution and the proposed design guidelines.
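To give a feel for the dynamic focus idea mentioned above (attenuating content whose depth differs from the estimated point of visual attention), here is a toy per-frame sketch. It is not the authors' implementation; the blur formula, falloff constant and sample scene are assumptions made for the example.

```python
# Illustrative sketch only: blur scene content in proportion to its depth
# distance from an estimated attended depth, recomputed every frame.
def blur_strength(object_depth: float, attended_depth: float, falloff: float = 0.5) -> float:
    """Return a 0..1 blur factor that grows with distance from the attended depth."""
    return min(1.0, abs(object_depth - attended_depth) * falloff)

def apply_dynamic_focus(scene_depths: dict, attended_depth: float) -> dict:
    """Map each scene object to the blur it should receive this frame."""
    return {name: blur_strength(d, attended_depth) for name, d in scene_depths.items()}

if __name__ == "__main__":
    # Hypothetical frame: the attention heuristic estimates the player looks ~3 m away.
    depths = {"crosshair_target": 3.0, "near_wall": 1.0, "far_enemy": 12.0}
    print(apply_dynamic_focus(depths, attended_depth=3.0))
```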
hawaii international conference on system sciences | 2015
Isys Macedo; Daniela Gorski Trevisan; Cristina Nader Vasconcelos; Esteban Clua
This work proposes a method for evaluating children's behavioral interactions with a game, more specifically for evaluating playful applications for children with cognitive disabilities. Our method introduces evaluation criteria covering children's behavioral interaction and game design analysis, adapted from the list of breakdown indication types of the Detailed Video Analysis (DEVAN), which was originally designed for regular applications. We present a case study of the proposed evaluation method with a detailed analysis of the game JECRIPE, originally developed to stimulate the cognitive abilities of children with Down syndrome of preschool age. The proposed method adopts qualitative and quantitative criteria to review the initial developmental factors that drove JECRIPE's design against the actual behavior observed in a group of children playing the game. As results of this case study, we demonstrate the reliability of the evaluation method and its capacity to uncover usability and fun problems to be considered and addressed in future game releases.
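As a small illustration of the quantitative side of a DEVAN-style analysis described above, the sketch below tallies breakdown indications from a coded video log. The indication codes and the sample session are invented for the example and are not the codes or data used in the study.

```python
# Illustrative sketch only: count breakdown indications assigned by an observer
# while reviewing a recorded play session, DEVAN-style.
from collections import Counter

# Hypothetical breakdown-indication codes an observer might assign per event.
CODES = {"WG": "wrong goal", "EX": "execution problem", "PZ": "puzzled", "RA": "random actions"}

def tally(coded_events):
    """Return how often each breakdown indication was observed."""
    counts = Counter(code for _, code in coded_events)
    return {CODES[c]: n for c, n in counts.items()}

if __name__ == "__main__":
    # (timestamp in seconds, code) pairs from a hypothetical play session.
    session = [(12, "PZ"), (45, "EX"), (51, "EX"), (90, "RA"), (130, "PZ")]
    print(tally(session))
```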