Daniel Mendes
University of Lisbon
Publications
Featured research published by Daniel Mendes.
symposium on 3d user interfaces | 2014
Daniel Mendes; Fernando Fonseca; Bruno Rodrigues De Araújo; Alfredo Ferreira; Joaquim A. Jorge
Stereoscopic tabletops offer unique visualization capabilities, enabling users to perceive virtual objects as if they were lying above the surface. While this allows virtual objects to coexist with user actions in the physical world, interacting with these virtual objects above the surface presents interesting challenges. In this paper, we aim to understand which approaches to 3D virtual object manipulation are suited to this scenario. To this end, we implemented five different techniques based on the literature: four are mid-air techniques, while the fifth relies on multi-touch gestures and serves as a baseline. Our setup combines affordable, non-intrusive tracking technologies with a multi-touch stereo tabletop, providing head and hand tracking to improve depth perception and support seamless interaction above the table. We conducted a user evaluation to find out which technique appealed most to participants. Results suggest that mid-air interactions combining direct manipulation with six degrees of freedom for the dominant hand are both more satisfying and more efficient than the alternatives tested.
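The abstract does not include code, but the core of 6-DOF direct manipulation for the dominant hand can be sketched as follows: on grab, record the object's pose relative to the hand; every frame, reapply that offset. This is a minimal illustration, not the paper's implementation; the class and method names, and the 4x4-matrix pose convention, are assumptions.

```python
import numpy as np

class DirectManipulation6DOF:
    """Minimal sketch of 6-DOF direct manipulation: while grabbed,
    the object rigidly follows the tracked hand (hypothetical API)."""

    def __init__(self):
        self.grab_offset = None  # hand-to-object transform at grab time

    def on_grab(self, hand_pose: np.ndarray, object_pose: np.ndarray):
        # Record the object's pose relative to the hand (both 4x4 matrices).
        self.grab_offset = np.linalg.inv(hand_pose) @ object_pose

    def on_hand_moved(self, hand_pose: np.ndarray) -> np.ndarray:
        # Reapply the stored offset so hand translation and rotation map
        # one-to-one onto the object (all six DOF at once).
        return hand_pose @ self.grab_offset

    def on_release(self):
        self.grab_offset = None
```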
sketch based interfaces and modeling | 2011
Pedro Lopes; Daniel Mendes; Bruno Rodrigues De Araújo; Joaquim A. Jorge
Multitouch-enabled surfaces can bring advantages to modelling scenarios, in particular if bimanual and pen input can be combined. In this work, we assess the suitability of multitouch interfaces for 3D sketching tasks. We developed a multitouch-enabled version of ShapeShop, whereby bimanual gestures allow users to explore the canvas through camera operations while using a pen to sketch. This provides a comfortable setting familiar to most users. Our contribution focuses on comparing the combined approach (bimanual touch and pen) to the pen-only interface on similar tasks. We conducted the evaluation with ten sketching experts who exercised both techniques. Results show that our approach both simplifies the workflow and lowers task times compared to the pen-only interface that most current sketching applications provide.
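The division of labour described here, pen for sketching and bimanual touch for camera control, amounts to routing input events by device. A rough sketch under assumed event and object types (not the actual ShapeShop code):

```python
def route_event(event, canvas, camera):
    """Dispatch input by device: the pen draws, touch drives the camera.
    Event fields (device, points, delta, ...) are illustrative assumptions."""
    if event.device == "pen":
        canvas.add_stroke_point(event.position)      # sketching
    elif event.device == "touch":
        if len(event.points) == 1:
            camera.pan(event.delta)                  # one finger: pan
        elif len(event.points) == 2:
            camera.zoom(event.pinch_scale)           # two fingers: zoom
            camera.rotate(event.twist_angle)         # ... and rotate
```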
virtual reality software and technology | 2016
Daniel Medeiros; Eduardo Cordeiro; Daniel Mendes; Maurício Sousa; Alberto Barbosa Raposo; Alfredo Ferreira; Joaquim A. Jorge
Travel in Virtual Environments is the simple action of moving from a starting point A to a target point B. Choosing an unsuitable technique can compromise the Virtual Reality experience and cause side effects such as spatial disorientation, fatigue and cybersickness. Effective travel techniques should be as natural as possible; real-walking techniques therefore yield better results, despite their physical limitations. Approaches that surpass these limitations employ indirect travel metaphors such as point-steering and target-based techniques. In fact, target-based techniques show a reduction in fatigue and cybersickness compared to point-steering techniques, even though they provide less control. In this paper we further investigate the effects of speed and transition in target-based techniques on factors such as comfort and cybersickness, using a Head-Mounted Display setup.
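To illustrate the two factors studied, speed and transition, one frame of target-based travel can be sketched as below; an instant transition corresponds to jumping straight to the target. The function names and frame-update structure are assumptions, not the paper's implementation.

```python
import numpy as np

def travel_step(user_pos, target_pos, speed, dt, instant=False):
    """One frame of target-based travel: either jump to the target
    (instant transition) or move toward it at a constant speed (m/s)."""
    if instant:
        return target_pos.copy()
    direction = target_pos - user_pos
    distance = np.linalg.norm(direction)
    step = speed * dt
    if step >= distance:          # would overshoot: snap to the target
        return target_pos.copy()
    return user_pos + direction / distance * step
```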
advances in computer entertainment technology | 2011
Daniel Mendes; Pedro Moniz Lopes; Alfredo Ferreira
Presently, multi-touch interactive surfaces enjoy widespread adoption as entertainment devices. Taking advantage of such devices, we present an interactive LEGO application, developed according to an adaptation of the building-block metaphor and direct multi-touch manipulation. Our solution (LTouchIt) allows users to create 3D models on a tabletop surface. To validate our approach, we compared LTouchIt with two LEGO applications, conducting a user study with 20 participants. The results suggest that our touch-based application can compete with existing mouse-based applications. It provides users with a hands-on experience, which we believe to be better suited to entertainment purposes.
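Central to a building-block metaphor is snapping released bricks to the stud grid. A minimal sketch, assuming standard LEGO dimensions; the function and constants are illustrative, not LTouchIt's code:

```python
# LEGO-style grid snapping: a brick released by a touch gesture lands
# on the nearest stud position. Unit sizes follow standard LEGO geometry.
STUD = 8.0          # horizontal stud pitch (mm)
PLATE_HEIGHT = 3.2  # vertical plate height (mm)

def snap_to_grid(x, y, z):
    """Quantize a dropped brick's position to the building grid."""
    return (round(x / STUD) * STUD,
            round(y / PLATE_HEIGHT) * PLATE_HEIGHT,
            round(z / STUD) * STUD)
```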
virtual reality software and technology | 2016
Daniel Mendes; Filipe Relvas; Alfredo Ferreira; Joaquim A. Jorge
Object manipulation is a key feature of almost every virtual environment. However, it is difficult to accurately place an object in immersive virtual environments using mid-air gestures that mimic interactions in the physical world, even though they are a direct and natural approach. Previous research studied mouse- and touch-based interfaces, concluding that separation of degrees of freedom (DOF) led to improved results. In this paper, we present the first user evaluation to assess the impact of explicit 6-DOF separation in mid-air manipulation tasks. We implemented a technique based on familiar virtual widgets that allow single-DOF control, and compared it against a direct approach and PRISM, which dynamically adjusts the ratio between hand and object motions. Our results suggest that full DOF separation benefits precision in spatial manipulations, at the cost of additional time for complex tasks. From our results we draw guidelines for 3D object manipulation in mid-air.
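PRISM, the comparison technique, scales object motion by the ratio of hand speed to a scaling constant: slow hand motion gives fine, scaled-down control, while fast motion passes through one-to-one. A sketch of that gain function following the published description of PRISM; the threshold values are illustrative:

```python
def prism_gain(hand_speed, min_speed=0.01, scaling_constant=0.5):
    """PRISM-style translation gain (speeds in m/s, values illustrative).
    Very slow motion is treated as noise, slow motion is scaled down
    for precision, and fast motion is applied directly."""
    if hand_speed < min_speed:
        return 0.0                            # ignore jitter
    if hand_speed < scaling_constant:
        return hand_speed / scaling_constant  # precise, scaled-down mode
    return 1.0                                # direct 1:1 manipulation

def apply_prism(object_pos, hand_delta, hand_speed):
    # Object displacement = hand displacement scaled by the PRISM gain;
    # object_pos and hand_delta can be scalars or numpy vectors.
    return object_pos + prism_gain(hand_speed) * hand_delta
```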
human factors in computing systems | 2017
Maurício Sousa; Daniel Mendes; Soraia Figueiredo Paulo; Nuno Matela; Joaquim A. Jorge; Daniel Simões Lopes
Reading room conditions such as illumination, ambient light, human factors and display luminance play an important role in how radiologists analyze and interpret images. Indeed, serious diagnostic errors can arise when observing images on everyday monitors. Typically, these occur whenever professionals are ill-positioned with respect to the display or visualize images under improper light and luminance conditions. In this work, we show that virtual reality can assist radiodiagnostics by considerably diminishing or cancelling out the effects of unsuitable ambient conditions. Our approach combines immersive head-mounted displays with interactive surfaces to support professional radiologists in analyzing medical images and formulating diagnoses. We evaluated our prototype with two senior medical doctors and four seasoned radiology fellows. Results indicate that our approach constitutes a viable, flexible, portable and cost-efficient alternative to traditional radiology reading rooms.
Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces | 2017
Maurício Sousa; Daniel Mendes; Rafael Kuffner dos Anjos; Daniel Medeiros; Alfredo Ferreira; Alberto Barbosa Raposo; João Madeiras Pereira; Joaquim A. Jorge
Context-aware pervasive applications can improve user experiences by tracking people in their surroundings. Such systems use multiple sensors to gather information about people and devices. However, when developing novel user experiences, researchers are left to build foundation code to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas. We introduce Creepy Tracker, an open-source toolkit to ease prototyping with multiple commodity depth cameras. It automatically selects the best sensor to follow each person, handling occlusions and maximizing interaction space, while providing full-body tracking in a scalable and extensible manner. It also keeps track of the position and orientation of stationary interactive surfaces while offering continuously updated point-cloud user representations combining both depth and color data. Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without any wearable markers or special devices. Furthermore, representative scenarios we implemented show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences.
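The "best sensor per person" policy can be illustrated with a simple scoring heuristic. Creepy Tracker's actual criteria are richer (occlusion handling, interaction-space maximization), and the sensor and skeleton interfaces below are assumptions:

```python
def pick_best_sensor(person, sensors):
    """Choose, for one tracked person, the depth camera that currently
    sees them best. The score is a stand-in heuristic: prefer sensors
    with more confidently tracked joints and a closer view."""
    def score(sensor):
        skeleton = sensor.skeleton_of(person)   # assumed toolkit call
        if skeleton is None:                    # person not visible here
            return float("-inf")
        tracked = sum(1 for j in skeleton.joints if j.confidence > 0.5)
        return tracked - 0.1 * sensor.distance_to(person)
    return max(sensors, key=score)
```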
Computers & Graphics | 2017
Daniel Mendes; Daniel Medeiros; Maurício Sousa; Eduardo Cordeiro; Alfredo Ferreira; Joaquim A. Jorge
In interactive systems, the ability to select virtual objects is essential. In immersive virtual environments, object selection is usually done at arm's length in mid-air by directly intersecting the desired object with the user's hand. However, selecting objects outside the user's arm reach still poses significant challenges, which direct approaches fail to address. Techniques proposed to overcome such limitations often follow an arm-extension metaphor or favor selection volumes combined with ray-casting. Nonetheless, while these approaches work for room-sized environments, they hardly scale up to larger scenarios with many objects. In this paper, we introduce a new taxonomy to classify existing selection techniques. In its wake, we propose PRECIOUS, a novel mid-air technique for selecting out-of-reach objects, featuring iterative refinement in Virtual Reality, a hitherto untried approach in this context. While comparable techniques have been developed for non-stereo and non-immersive environments, these are not suitable for Immersive Virtual Reality. Our technique is the first to employ iterative progressive refinement in such settings. It uses cone-casting to select multiple objects and moves the user closer to them in each refinement step, to allow accurate selection of the desired target. A user evaluation showed that PRECIOUS compares favorably against state-of-the-art approaches. Indeed, our results indicate that PRECIOUS is a versatile approach to out-of-reach target acquisition, combining accurate selection with consistent task completion times across different scenarios.
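The refinement loop described above can be sketched as follows: cast a selection cone, and while it still contains more than one candidate, move the user toward the candidates and narrow the cone. This is a schematic reading of the abstract, not the authors' code; all helper names are assumed.

```python
import numpy as np

def objects_in_cone(apex, direction, angle, objects):
    """Return the objects whose centers fall inside the selection cone
    (apex point, unit direction vector, half-angle in radians)."""
    inside = []
    for obj in objects:
        to_obj = obj.center - apex
        dist = np.linalg.norm(to_obj)
        if dist > 0 and np.dot(to_obj / dist, direction) >= np.cos(angle):
            inside.append(obj)
    return inside

def precious_select(user, direction, objects, angle=0.2, shrink=0.6,
                    max_steps=10):
    """Iterative refinement: step the user toward the candidates and
    narrow the cone until a single object remains."""
    candidates = objects_in_cone(user.position, direction, angle, objects)
    for _ in range(max_steps):
        if len(candidates) <= 1:
            break
        centroid = np.mean([o.center for o in candidates], axis=0)
        user.move_toward(centroid)      # assumed travel step
        angle *= shrink                 # narrower cone each iteration
        candidates = objects_in_cone(user.position, direction, angle, objects)
    return candidates[0] if len(candidates) == 1 else None
```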
international conference on artificial reality and telexistence | 2014
João Guerreiro; Daniel Medeiros; Daniel Mendes; Maurício Sousa; Joaquim A. Jorge; Alberto Barbosa Raposo; Ismael H. F. dos Santos
Globalization has transformed engineering design into a world-wide endeavor pursued by geographically distributed specialist teams. Widespread adoption of VR for design, and the need to act and place marks directly on the objects under discussion in design review tasks, led to research on annotations in virtual collaborative environments. However, conventional approaches have yet to progress beyond the yellow post-it + text metaphor. Indeed, multimedia such as audio, sketches, video and animations afford greater expressiveness, which could be put to good use in collaborative environments. Furthermore, individual annotations fail to capture both the rationale and the flow of discussion, which are key to understanding project design decisions. One exemplar is offshore engineering projects, which normally engage geographically distributed, highly specialized engineering teams and demand both improved productivity, due to project costs, and reduced risks when reviewing designs of deep-water oil & gas platforms. In this paper, we present an approach to rich, structured multimedia annotations to support discussion and decision making in design review tasks. Furthermore, our approach supports issue-based argumentation to reveal the provenance of design decisions and better support the workflow in engineering projects. While this is an initial exploration of the solution space, examples show greater support for collaborative design review than traditional approaches.
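A data model for the structured, issue-based multimedia annotations argued for here might look like the following sketch: IBIS-style issues, positions and arguments capture the flow of discussion, and attachments carry the multimedia. All field names are assumptions, not the paper's schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MediaAttachment:
    kind: str   # "audio", "sketch", "video", "animation", "text"
    uri: str    # where the media item is stored

@dataclass
class Argument:
    author: str
    supports: bool  # argues for (True) or against (False) a position
    media: List[MediaAttachment] = field(default_factory=list)

@dataclass
class Position:
    author: str
    summary: str
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Issue:
    """Issue-based annotation anchored to a point on the reviewed model,
    preserving the rationale behind a design decision."""
    anchor: Tuple[float, float, float]  # (x, y, z) on the model
    question: str
    positions: List[Position] = field(default_factory=list)
```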
virtual reality software and technology | 2016
Daniel Medeiros; Maurício Sousa; Daniel Mendes; Alberto Barbosa Raposo; Joaquim A. Jorge
Head-Mounted Displays (HMDs) and similar 3D visualization devices are becoming ubiquitous. Going a step further, HMD see-through systems bring virtual objects into real-world settings, allowing augmented reality to be used in complex engineering scenarios. Optical and video see-through systems differ in how the real world is captured by the device. To provide a seamless integration of real and virtual imagery, the absolute depth and size of virtual and real objects should match appropriately. However, these technologies are still in their early stages, each featuring different strengths and weaknesses which affect the user experience. In this work we compare optical to video see-through systems, focusing on depth perception via exocentric and egocentric methods. Our study pairs Meta Glasses, an off-the-shelf optical see-through device, with a modified Oculus Rift setup with attached video cameras for video see-through. Results show that, with the hardware currently available, the video see-through configuration provides better overall results. These experiments and results can inform interaction design for both virtual and augmented reality conditions.