Marcio Cabral
University of São Paulo
Publications
Featured research published by Marcio Cabral.
Latin American Conference on Human-Computer Interaction | 2005
Marcio Cabral; Carlos Hitoshi Morimoto; Marcelo Knörich Zuffo
This paper discusses several usability issues related to the use of gestures as an input mode in multimodal interfaces. Gestures have been suggested before as a natural solution for applications that require hands-free, no-touch interaction with computers, such as virtual reality (VR) environments. We introduce a simple but robust 2D computer-vision-based gesture recognition system that was successfully used for interaction in VR environments such as CAVEs and Powerwalls. This interface was tested under three different scenarios: as a regular pointing device in a GUI, as a navigation tool, and as a visualization tool. Our experiments show that the time to completion of simple pointing tasks is considerably longer than with a mouse, and that even short periods of use cause fatigue. Despite these drawbacks, the use of gestures as an alternative mode in multimodal interfaces offers several advantages: it gives quick, natural, and intuitive access to computing resources that might be embedded in the environment, and it scales nicely to group and collaborative applications, where gestures can be used sporadically.
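The abstract gives no implementation details, but the kind of 2D vision pipeline it describes can be illustrated with a minimal sketch. The skin-color thresholds and the topmost-hull-point fingertip heuristic below are illustrative assumptions, not the authors' method:

```python
# Minimal sketch of a 2D vision-based pointing-gesture tracker, in the spirit
# of the pipeline described above. Thresholds and heuristics are assumptions.
import cv2
import numpy as np

def find_fingertip(frame_bgr):
    """Segment a hand by skin color and return the topmost contour point
    as a crude fingertip/cursor estimate, or None if no hand is found."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Loose skin-color range in HSV (assumed; tune per camera and lighting).
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 2000:   # reject small blobs
        return None
    # Topmost hull point as the pointing fingertip (camera above the screen).
    hull = cv2.convexHull(hand)
    x, y = hull[hull[:, 0, 1].argmin()][0]
    return int(x), int(y)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    tip = find_fingertip(frame)
    if tip:
        cv2.circle(frame, tip, 8, (0, 255, 0), -1)  # draw the cursor
    cv2.imshow("pointing", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```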
International Conference on 3D Web Technology | 2007
Marcio Cabral; Marcelo Knörich Zuffo; Silvia Ghirotti; Olavo Belloc; Leonardo Nomura; Mario Nagamura; Fernanda Andrade; Regis Rossi Alves Faria; Leandro Ferraz
In this paper we present our experience in using virtual reality technologies to accurately reconstruct and explore ancient and historic city buildings. Virtual reality techniques provide a powerful set of tools to explore and access the history of a city. In order to explore, visualize, and hear such history, we divided the process into three phases: historical data gathering and analysis; 3D reconstruction and modeling; and interactive immersive visualization, auralization, and display. The set of guidelines we devised helped put into practice the extensible VR tools that are available but not always easy for inexperienced users to combine; the guidelines also kept our work flowing smoothly and helped avoid problems in subsequent phases. Most importantly, the X3D standard provided an environment capable of supporting the design and validation process as well as the visualization phase. Finally, we present the results achieved and analyze the extensibility of the framework. Although VR tools and techniques are now widely available, there is still a gap between merely using the tools and truly taking advantage of VR in historic architectural reconstruction, where users can immerse themselves in the reconstructed world, consider various scenarios and possibilities, and find new inspiration. This is an ongoing process that we believe will grow and support current architectural development.
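As a toy illustration of how the X3D standard mentioned above can package a reconstructed building for visualization, the sketch below writes a minimal X3D scene with Python's standard library; the file names, transform values, and camera position are placeholders, not the project's data:

```python
# Toy sketch: emit a minimal X3D scene wrapping a reconstructed building mesh.
# The Inline URL and transform values are placeholders, not the project's data.
import xml.etree.ElementTree as ET

def make_scene(model_url, out_path):
    x3d = ET.Element("X3D", profile="Interchange", version="3.2")
    scene = ET.SubElement(x3d, "Scene")
    # Position the reconstructed building within the shared historical scene.
    t = ET.SubElement(scene, "Transform", translation="0 0 -10")
    ET.SubElement(t, "Inline", url=model_url)
    # A Viewpoint gives visitors a sensible starting camera.
    ET.SubElement(scene, "Viewpoint", position="0 1.6 5",
                  description="Street level")
    ET.ElementTree(x3d).write(out_path, xml_declaration=True,
                              encoding="utf-8")

make_scene("building_facade.x3d", "city_scene.x3d")
```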
International Conference on Computer Graphics and Interactive Techniques | 2014
Fátima Ferreira; Marcio Cabral; Olavo Belloc; Gregor Miller; Celso Setsuo Kurashima; R. de Deus Lopes; Ian Stavness; Junia Coutinho Anacleto; Marcelo Knörich Zuffo; Sidney S. Fels
We constructed a personal, spherical, multi-projector perspective-corrected rear-projected display called Spheree. Spheree uses multiple calibrated pico-projectors inside a spherical display with content rendered from a user-centric viewpoint. Spheree uses optical tracking for head-coupled rendering, providing parallax-based 3D depth cues. Spheree is compact, supporting direct interaction techniques. For example, 3D models can be modified via 3D interactions on the sphere, providing a 3D sculpture experience.
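Spheree's exact rendering pipeline is not given here, but head-coupled, perspective-corrected rendering is commonly built on a generalized off-axis projection computed from the tracked eye position (after Kooima's formulation for a planar screen). The sketch below shows that standard building block; Spheree's spherical surface and multi-projector blending add machinery this toy example omits:

```python
# Generalized off-axis projection for head-coupled rendering on a planar
# screen (Kooima-style). This is the standard source of parallax cues from a
# tracked head; it is not Spheree's full spherical pipeline.
import numpy as np

def off_axis_projection(pa, pb, pc, eye, near, far):
    """Frustum for a screen with corners pa (lower-left), pb (lower-right),
    pc (upper-left), as seen from the tracked eye point."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal

    va, vb, vc = pa - eye, pb - eye, pc - eye         # eye-to-corner vectors
    d = -va.dot(vn)                                   # eye-to-screen distance
    l = vr.dot(va) * near / d                         # frustum extents at near
    r = vr.dot(vb) * near / d
    b = vu.dot(va) * near / d
    t = vu.dot(vc) * near / d

    P = np.array([[2*near/(r-l), 0, (r+l)/(r-l), 0],
                  [0, 2*near/(t-b), (t+b)/(t-b), 0],
                  [0, 0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0, 0, -1, 0]])
    # Rotate into screen space and translate the tracked eye to the origin.
    M = np.eye(4); M[:3, :3] = np.vstack([vr, vu, vn])
    T = np.eye(4); T[:3, 3] = -eye
    return P @ M @ T

# Example: a 2 m wide screen, eye tracked 0.6 m in front and slightly left.
proj = off_axis_projection(np.array([-1.0, -0.75, 0.0]),
                           np.array([ 1.0, -0.75, 0.0]),
                           np.array([-1.0,  0.75, 0.0]),
                           eye=np.array([-0.2, 0.1, 0.6]),
                           near=0.1, far=100.0)
```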
2012 14th Symposium on Virtual and Augmented Reality | 2012
Fernando Teubl; Celso Setsuo Kurashima; Marcio Cabral; Marcelo Knörich Zuffo
Multi-projector systems use a cluster of projectors to offer both higher resolution and higher brightness, and they can provide better visual quality than traditional systems built around a single high-performance projector. Given the high cost of high-end projectors, using multiple low-cost projectors can considerably reduce the cost of such an installation. This article presents the research and development of a scalable multi-projection system that enables the construction of virtual reality systems with a large number of projectors and graphics computers, capable of achieving a high-resolution display. We demonstrate the viability of such a system with the development of a camera-based multi-projector system called FastFusion, which automatically calibrates casually aligned projectors to properly blend their projections. Our system software improves on known algorithms from the literature for projector calibration and image blending; the main improvement is a more efficient distribution of the calibration process. In addition, since our library proposes a new architecture able to manage many projectors, it may enable immersive systems with retina resolution. FastFusion has been tested and validated with virtual reality applications. In this work, we analyze the visual performance of FastFusion in a CAVE system with three walls, eighteen projectors, and nine computers.
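FastFusion's camera-based calibration is not reproduced here, but the blending step it feeds can be sketched: overlapping projector regions are attenuated with complementary alpha ramps so that the summed luminance stays roughly constant. The ramp shape and gamma value below are illustrative assumptions, and the homography warp that a real system applies first is omitted:

```python
# Toy sketch of edge-attenuation ("feathered") alpha masks for blending two
# overlapping projectors. Real camera-calibrated systems also warp each image
# by a recovered homography before applying such masks.
import numpy as np

def overlap_masks(n, gamma=2.2):
    """Alpha ramps for the shared n-pixel overlap band of two projectors.
    Ramps are linear in light output; the 1/gamma exponent compensates for
    projector gamma so the summed luminance stays constant."""
    t = np.linspace(0.0, 1.0, n)
    return (1.0 - t) ** (1.0 / gamma), t ** (1.0 / gamma)

# Apply to the overlapping 200-pixel band of each projector's framebuffer.
wl, wr = overlap_masks(200)
frame_left = np.ones((768, 1024, 3))           # placeholder images
frame_right = np.ones((768, 1024, 3))
frame_left[:, -200:, :] *= wl[None, :, None]   # right edge of left projector
frame_right[:, :200, :] *= wr[None, :, None]   # left edge of right projector
```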
Symposium on 3D User Interfaces | 2014
Marcio Cabral; Andre Montes; Olavo Belloc; Rodrigo B. D. Ferraz; Fernando Teubl; Fabio Doreto; Roseli de Deus Lopes; Marcelo Knörich Zuffo
This paper presents our solution to the 3DUI 2014 Contest, which concerns the selection and annotation of 3D point-cloud data. This challenge is a classic problem whose solution, implemented correctly, is useful in a wide range of 3D virtual reality applications and environments. Our approach is a robust, simple, and intuitive solution based on bi-manual interaction gestures. We provide a first-person navigation mode for data exploration, point selection, and annotation, offering a straightforward and intuitive approach to navigation using one's hands. With the bi-manual gesture interface, the user performs simple but powerful gestures to navigate through the 3D point-cloud data within a 3D modelling tool to explore, select, and/or annotate points. The implementation is based on COTS (commercial off-the-shelf) systems: for modelling and annotation we adopted Blender, a widely available open-source 3D editing tool, and for gesture recognition we adopted the low-cost Leap Motion desktop system. We also performed an informal user study that showed the intuitiveness of our solution: users were able to use our system fairly easily, with a fast learning curve.
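The paper's exact gesture set is not spelled out here; as a hedged sketch, assuming two tracked palm positions per frame (e.g., from the Leap Motion), a bi-manual mapping can drive navigation by turning hand-pair motion into panning, separation change into zoom, and bearing change into rotation:

```python
# Sketch of a bi-manual navigation mapping. The gesture-to-camera mapping and
# the gains are illustrative assumptions, not the paper's exact gesture set.
import numpy as np

class BimanualNav:
    def __init__(self, pan_gain=1.0, zoom_gain=2.0, rot_gain=1.0):
        self.prev = None
        self.pan_gain, self.zoom_gain, self.rot_gain = pan_gain, zoom_gain, rot_gain

    def update(self, left, right):
        """left/right: 3D palm positions (meters). Returns (pan, zoom, yaw)."""
        left, right = np.asarray(left, float), np.asarray(right, float)
        mid = 0.5 * (left + right)                 # hand-pair centroid
        sep = np.linalg.norm(right - left)         # hand separation
        yaw = np.arctan2(*(right - left)[[2, 0]])  # bearing in the XZ plane
        if self.prev is None:
            self.prev = (mid, sep, yaw)
            return np.zeros(3), 0.0, 0.0
        pmid, psep, pyaw = self.prev
        self.prev = (mid, sep, yaw)
        return (self.pan_gain * (mid - pmid),      # camera pan delta
                self.zoom_gain * (sep - psep),     # dolly/zoom delta
                self.rot_gain * (yaw - pyaw))      # yaw delta (radians)

nav = BimanualNav()
nav.update([-0.10, 0.2, 0.0], [0.10, 0.2, 0.0])   # first frame primes state
pan, zoom, yaw = nav.update([-0.12, 0.2, 0.0], [0.12, 0.2, 0.0])  # hands spread: zoom
```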
3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video | 2014
F. Teubl; C. S. Kurashima; Marcio Cabral; Roseli de Deus Lopes; J. C. Anacleto; Marcelo Knörich Zuffo; Sidney S. Fels
We describe Spheree, a novel, interactive, spherical, perspective-corrected 3D display useful for 3D animations and 3D image-based content. We use an array of pico projectors mounted under the spherical display, calibrated and blended with an image-fusion library to achieve a single continuous image on the spherical screen without special lenses or mirrors. By tracking the user's viewpoint, we render the correct perspective to achieve a 3D view. This approach lets the viewer look around the entire sphere and receive both horizontal and vertical motion parallax for a full 3D view. Spheree can show graphics-based content such as 3D computer graphics animation and 3D image-based rendering applications, and through a 3D plugin library it can easily serve as a second screen for computer animation and modelling software. Spheree is scalable from a small, handheld display to large, walk-around experiences, and it supports direct interaction techniques for an engaging participant experience. The advances of our 3D display are the fidelity of the experience and the robustness of the display.
Symposium on 3D User Interfaces | 2012
Marcio Cabral; Gabriel Roque; Douglas Fonseca dos Santos; Luiz Paulucci; Marcelo Knörich Zuffo
This paper presents our solution to the challenge posed at the 3DUI 2012 Contest: assisted collaborative navigation in a 3D environment. One user (the power user) has a global view of the 3D environment, while the other user (the explorer) has a first-person view of it. The power user employs co-located gestures on a powerwall-like display to manipulate and explore the VR environment; these gestures allow the power user to rotate, translate, and zoom, and to place cues for the explorer within the environment, effectively aiding the exploration. The explorer navigates the scene by means of a point-and-go metaphor: the explorer points to where they want to go using an optically tracked Wiimote controller, and a virtual line connecting the Wiimote in their hand to the target surface is rendered in the VR environment. We chose point light sources as the cues for the explorer; these lights show which direction to follow to complete the scavenger hunt through the VR environment. An informal user study showed that users enjoyed completing the given task, despite the initial learning curve for the power-user role.
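The core of the point-and-go metaphor is a ray cast from the tracked controller. The sketch below intersects the ray with only a ground plane, a simplification of the tracked scene surfaces the real system would test against:

```python
# Sketch of the "point and go" metaphor: cast a ray from the tracked
# controller's position along its pointing direction and intersect it with
# the ground plane to obtain a travel target.
import numpy as np

def point_and_go_target(origin, direction, ground_y=0.0):
    """Return the point where the controller ray meets the plane y = ground_y,
    or None if the ray points away from it."""
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    if abs(direction[1]) < 1e-6:         # ray parallel to the ground
        return None
    t = (ground_y - origin[1]) / direction[1]
    if t <= 0:                           # intersection behind the controller
        return None
    return origin + t * direction        # target; also the rendered line's end

# Controller held 1.2 m up, pointing forward and slightly down.
target = point_and_go_target([0.0, 1.2, 0.0], [0.0, -0.3, -1.0])
```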
International Conference on 3D Web Technology | 2012
Olavo Belloc; Rodrigo B. D. Ferraz; Marcio Cabral; Roseli de Deus Lopes; Marcelo Knörich Zuffo
Virtual reality procedure-training simulators require that users perform actions in a specific order during the procedure. Recent advances in 3D web technologies such as Web3D, HTML5, and WebGL allow complex 3D scenes to be rendered interactively within a web browser. However, designing and implementing a complete VR procedure-training application using these standards is complex and not straightforward. In this paper we present X3D nodes that overcome these limitations and allow procedure-training scenarios to be described within X3D. Moreover, implementing these nodes with X3DOM allows training scenarios to run in any standard web browser, without additional plugins. In particular, we propose three new nodes. These nodes define additional layers that allow relationships and dependencies between geometric entities to be created, and they provide the necessary means to develop VR procedure-training simulators using X3D. Finally, to validate our proposal, we developed a Petri-net-based training application targeted at the maintenance of hydroelectric power plants.
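To make the Petri-net idea concrete: the hypothetical minimal net below enforces that a user action (a transition) can fire only when its prerequisite steps (input places) hold tokens, which is exactly the ordering constraint procedure training needs. The names and structure are illustrative, not the paper's nodes:

```python
# Minimal Petri net enforcing action ordering in a training procedure.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)          # tokens per place
        self.transitions = {}                 # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def can_fire(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.can_fire(name):
            raise ValueError(f"action '{name}' is out of order")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A two-step maintenance procedure: power must be cut before opening a panel.
net = PetriNet({"start": 1})
net.add_transition("cut_power", ["start"], ["power_off"])
net.add_transition("open_panel", ["power_off"], ["panel_open"])
net.fire("cut_power")
net.fire("open_panel")   # ok; firing it first would raise ValueError
```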
Symposium on 3D User Interfaces | 2015
Marcio Cabral; Andre Montes; Gabriel Roque; Olavo Belloc; Mario Nagamura; Regis Rossi Alves Faria; Fernando Teubl; Celso Setsuo Kurashima; Roseli de Deus Lopes; Marcelo Knörich Zuffo
Inspired by principles for designing musical instruments, we implemented a new 3D virtual instrument with a particular mapping of touchable virtual spheres to the notes and chords of a given musical scale. The spheres are metaphors for note keys, organized in multiple rows to form a playable spatial instrument in which the player can perform sequences of notes and chords across the scale with short gestures that minimize jump distances. The idea of alternative arrangements of notes over the playable space is not new; it has been pursued, for instance, in alternative keyboard layouts. Our implementation employs an Oculus Rift and a Razer Hydra for gesture input, and it showed that customizing instrumental mappings with 3D tools can ease the performance of complex songs by allowing fast execution of specific note combinations.
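The paper's exact sphere layout is not described here; as an illustrative sketch, one can map rows of spheres to octaves and columns to scale degrees, then trigger whichever sphere a tracked hand touches. The scale, spacing, and reach threshold below are assumptions:

```python
# Sketch of mapping touchable spheres to the notes of a musical scale, laid
# out in rows so common note sequences need only short hand movements.
C_MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets within an octave

def build_layout(root_midi=60, rows=3, cols=7, spacing=0.12):
    """Return a list of (position_xyz, midi_note): each row is one octave,
    each column one scale degree, spaced `spacing` meters apart."""
    layout = []
    for r in range(rows):
        for c in range(cols):
            note = root_midi + 12 * r + C_MAJOR_STEPS[c]
            pos = (c * spacing, r * spacing, 0.0)  # flat grid facing the player
            layout.append((pos, note))
    return layout

def nearest_note(layout, hand_pos, reach=0.06):
    """MIDI note of the sphere the hand is touching, or None if out of reach."""
    dist2 = lambda p: sum((a - b) ** 2 for a, b in zip(p, hand_pos))
    pos, note = min(layout, key=lambda s: dist2(s[0]))
    return note if dist2(pos) <= reach ** 2 else None

spheres = build_layout()
print(nearest_note(spheres, (0.24, 0.0, 0.02)))   # third degree -> MIDI 64
```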
International Conference on Computer Graphics and Interactive Techniques | 2016
Eduardo Zilles Borba; Marcio Cabral; Roseli de Deus Lopes; Marcelo Knörich Zuffo; Regis Kopper
This work presents a fully immersive virtual environment that simulates the Brazilian archaeological site of Itapeva, in São Paulo. To create a virtual model relevant to archaeological research, we developed a realistic 3D experience. All data from the physical space were collected with laser scanning and image-based modeling. To provide an immersive feeling when exploring the virtual environment, users view its aesthetic elements through a head-mounted display and navigate using 3D input devices. Through this simulation, users experience the illusion of presence at the archaeological site and can explore its landscapes in a non-destructive way.