
Publication


Featured research published by Daniel Medeiros.


Virtual Reality Continuum and Its Applications in Industry | 2013

A tablet-based 3D interaction tool for virtual engineering environments

Daniel Medeiros; Lucas Teixeira; Felipe Gomes de Carvalho; Ismael H. F. dos Santos; Alberto Barbosa Raposo

Three-dimensional computer-aided design (3D CAD) modeling and review is one of the most common engineering project tools. Interaction in these environments is characterized by the high precision required to execute specific tasks. Such tasks generally rely on dedicated interaction devices with 4 or more degrees of freedom, such as 3D mice. Current applications involving 3D interaction use these devices for object modeling or for implementing navigation, selection and manipulation techniques in a virtual environment. A related problem is the need to control naturally non-immersive tasks, such as symbolic input (e.g., text, photos). In addition, the steep learning curve of such non-conventional devices is a recurring problem. The addition of sensors and the popularization of smartphones and tablets have enabled the use of these devices in virtual engineering environments. They differ from other devices in their ability to display additional information and to perform naturally non-immersive tasks. This work presents a tablet-based 3D interaction tool that aggregates all major 3D interaction tasks: navigation, selection, manipulation, system control and symbolic input. To validate the proposed tool, the SimUEP-Ambsim application was chosen, an oil and gas simulator that offers the required complexity and allows all implemented techniques to be exercised. The tool was then tested in a second application, a photovoltaic solar plant simulator, in order to evaluate the generality of the concept.
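
For illustration only, the sketch below shows one way a tablet front end could route its input events to the five interaction categories listed above (navigation, selection, manipulation, system control, symbolic input). The gesture names and the mapping are hypothetical assumptions, not the tool's actual bindings.

```python
# Hypothetical routing of tablet events to the five 3D interaction categories.
from enum import Enum, auto

class Task(Enum):
    NAVIGATION = auto()
    SELECTION = auto()
    MANIPULATION = auto()
    SYSTEM_CONTROL = auto()
    SYMBOLIC_INPUT = auto()

# Assumed example bindings; a real tool would define its own gesture set.
BINDINGS = {
    "two_finger_pan": Task.NAVIGATION,
    "tap": Task.SELECTION,
    "drag": Task.MANIPULATION,
    "menu_button": Task.SYSTEM_CONTROL,
    "keyboard": Task.SYMBOLIC_INPUT,   # text/photo input handled natively on the tablet
}

def route(event: str):
    """Return the interaction task a tablet event maps to, if any."""
    return BINDINGS.get(event)

print(route("two_finger_pan"))  # Task.NAVIGATION
```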


Virtual Reality Software and Technology | 2016

Effects of speed and transitions on target-based travel techniques

Daniel Medeiros; Eduardo Cordeiro; Daniel Mendes; Maurício Sousa; Alberto Barbosa Raposo; Alfredo Ferreira; Joaquim A. Jorge

Travel in Virtual Environments is the simple action whereby a user moves from a starting point A to a target point B. Choosing an inappropriate technique can compromise the Virtual Reality experience and cause side effects such as spatial disorientation, fatigue and cybersickness. Effective travel techniques should be as natural as possible; real-walking techniques thus present better results, despite their physical limitations. Approaches to surpass these limitations employ techniques that provide an indirect travel metaphor, such as point-steering and target-based travel. Target-based techniques in fact show reduced fatigue and cybersickness compared to point-steering techniques, even though they provide less control. In this paper we further investigate the effects of speed and transitions in target-based techniques on factors such as comfort and cybersickness, using a Head-Mounted Display setup.
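
As a rough illustration of the design space studied here (instant teleportation versus an animated transition at a chosen speed), the following minimal sketch implements a target-based travel controller. It is not the paper's apparatus; the class and parameter names (TargetTravel, speed, instant) are assumptions made for the example.

```python
# Minimal target-based travel sketch: either jump instantly to the target or
# animate the viewpoint towards it at a fixed speed, one frame at a time.
from dataclasses import dataclass

Vec3 = tuple[float, float, float]

def lerp(a: Vec3, b: Vec3, t: float) -> Vec3:
    """Linearly interpolate between two 3D points."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

@dataclass
class TargetTravel:
    position: Vec3
    speed: float = 3.0            # metres per second for animated transitions
    instant: bool = False         # True = teleport, False = animated transition
    _target: Vec3 | None = None

    def go_to(self, target: Vec3) -> None:
        if self.instant:
            self.position = target    # teleport: no intermediate frames shown
        else:
            self._target = target     # animated: resolved over update() calls

    def update(self, dt: float) -> None:
        """Advance an ongoing transition by one frame of dt seconds."""
        if self._target is None:
            return
        dist = sum((self._target[i] - self.position[i]) ** 2 for i in range(3)) ** 0.5
        step = self.speed * dt
        if step >= dist:
            self.position, self._target = self._target, None   # arrived
        else:
            self.position = lerp(self.position, self._target, step / dist)

# Example: start a 6 m transition at 3 m/s and simulate one second of 90 Hz frames.
travel = TargetTravel(position=(0.0, 0.0, 0.0))
travel.go_to((6.0, 0.0, 0.0))
for _ in range(90):
    travel.update(1 / 90)
```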


Intelligent User Interfaces | 2016

SleeveAR: Augmented Reality for Rehabilitation using Realtime Feedback

Maurício Sousa; João Vieira; Daniel Medeiros; Artur M. Arsénio; Joaquim A. Jorge

We present an intelligent user interface that allows people to perform rehabilitation exercises by themselves under the offline supervision of a therapist. Every year, many people suffer injuries that require rehabilitation. This entails considerable time overheads, since it requires people to perform specified exercises under the direct supervision of a therapist. It is therefore desirable that patients continue performing exercises outside the clinic (for instance at home, thus without direct supervision) to complement in-clinic physical therapy. However, to perform rehabilitation tasks accurately, patients need appropriate feedback, as otherwise provided by a physical therapist, to ensure that these unsupervised exercises are correctly executed. Different approaches address this problem by providing feedback mechanisms to aid rehabilitation. Unfortunately, test subjects frequently report having trouble fully understanding the feedback thus provided, which makes it hard to correctly execute the prescribed movements. Worse, injuries may occur due to incorrect performance of the prescribed exercises, which severely hinders recovery. SleeveAR is a novel approach that provides real-time, active feedback, using multiple projection surfaces for effective visualizations. Empirical evaluation shows the effectiveness of our approach as compared to traditional video-based feedback. Our experimental results show that our intelligent UI can successfully guide subjects through an exercise prescribed (and demonstrated) by a physical therapist, with performance improvements between consecutive executions, which is desirable for successful rehabilitation.
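
To make the feedback idea concrete, here is a deliberately simplified sketch, not SleeveAR's implementation: it compares a live sequence of arm elevation angles against the therapist's prescribed sequence and emits a textual correction cue per frame. The function name, the per-frame angle representation and the 10-degree tolerance are assumptions for the example.

```python
# Illustrative real-time feedback idea: compare the live exercise against the
# prescribed one and cue a correction whenever the deviation exceeds a tolerance.

def feedback(prescribed: list[float], live: list[float], tolerance_deg: float = 10.0) -> list[str]:
    """Compare per-frame arm elevation angles (degrees) and return cues."""
    cues = []
    for frame, (target, actual) in enumerate(zip(prescribed, live)):
        error = actual - target
        if abs(error) <= tolerance_deg:
            cues.append(f"frame {frame}: on track")
        elif error > 0:
            cues.append(f"frame {frame}: lower the arm by ~{error:.0f} deg")
        else:
            cues.append(f"frame {frame}: raise the arm by ~{-error:.0f} deg")
    return cues

# Example: a prescribed slow raise versus a session that overshoots near the end.
print(feedback([0, 30, 60, 90], [0, 28, 70, 110]))
```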


2012 14th Symposium on Virtual and Augmented Reality | 2012

A Case Study on the Implementation of the 3C Collaboration Model in Virtual Environments

Daniel Medeiros; Eduardo Ribeiro; Peter Dam; Rodrigo Pinheiro; Thiago Motta; Manuel E. Loaiza; Alberto Barbosa Raposo

Throughout the years, many studies have explored the potential of Virtual Reality (VR) technologies to support collaborative work. However, few studies have looked into CSCW (Computer Supported Cooperative Work) collaboration models that could help VR systems improve their support for collaborative tasks. This paper analyzes the applicability of the 3C collaboration model as a methodology to model and define collaborative tools in the development of a collaborative virtual reality application. A case study is presented to illustrate the selection and evaluation of different tools that aim to support the actions of communication, cooperation and coordination among users interacting in a virtual environment. The main objective of this research is to show that the criteria defined by the 3C model can be mapped to parameters for classifying the interactive tools used in the development of collaborative virtual environments.
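
A minimal sketch of the classification idea, assuming a simple tagging scheme: each collaborative-VE tool is tagged with the 3C dimensions it supports (communication, cooperation, coordination) and tools can then be grouped per dimension. The tool names and their tags are illustrative, not taken from the case study.

```python
# Illustrative 3C tagging of collaborative-VE tools (example data, not the paper's).
from enum import Flag, auto

class C3(Flag):
    COMMUNICATION = auto()
    COOPERATION = auto()
    COORDINATION = auto()

tools = {
    "voice chat":        C3.COMMUNICATION,
    "shared annotation": C3.COMMUNICATION | C3.COOPERATION,
    "floor control":     C3.COORDINATION,
    "co-manipulation":   C3.COOPERATION | C3.COORDINATION,
}

def tools_for(dimension: C3) -> list[str]:
    """List the tools that cover a given 3C dimension."""
    return [name for name, tags in tools.items() if dimension in tags]

print(tools_for(C3.COOPERATION))  # ['shared annotation', 'co-manipulation']
```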


Proceedings of the 2017 ACM International Conference on Interactive Surfaces and Spaces | 2017

Creepy Tracker Toolkit for Context-aware Interfaces

Maurício Sousa; Daniel Mendes; Rafael Kuffner dos Anjos; Daniel Medeiros; Alfredo Ferreira; Alberto Barbosa Raposo; João Madeiras Pereira; Joaquim A. Jorge

Context-aware pervasive applications can improve user experiences by tracking people in their surroundings. Such systems use multiple sensors to gather information about people and devices. However, when developing novel user experiences, researchers are left building foundation code to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas. We introduce Creepy Tracker, an open-source toolkit to ease prototyping with multiple commodity depth cameras. It automatically selects the best sensor to follow each person, handling occlusions and maximizing the interaction space, while providing full-body tracking in a scalable and extensible manner. It also keeps track of the position and orientation of stationary interactive surfaces while offering continuously updated point-cloud user representations combining both depth and color data. Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without any wearable markers or special devices. Furthermore, implemented representative scenarios show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences.
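
The per-person sensor selection described above can be illustrated with a small sketch; this is an assumption-laden simplification (nearest unoccluded camera wins), not Creepy Tracker's actual selection policy, and the names used are hypothetical.

```python
# Illustrative best-sensor choice: for each tracked person, pick the closest
# depth camera whose view of that person is not occluded.
import math
from dataclasses import dataclass

@dataclass
class Sensor:
    name: str
    position: tuple[float, float, float]

def best_sensor(person_pos, sensors, occluded: set[str]) -> Sensor:
    """Choose the nearest sensor not currently occluded; fall back to any sensor."""
    candidates = [s for s in sensors if s.name not in occluded] or sensors
    return min(candidates, key=lambda s: math.dist(s.position, person_pos))

sensors = [Sensor("kinect-A", (0.0, 2.5, 0.0)), Sensor("kinect-B", (4.0, 2.5, 3.0))]
print(best_sensor((3.5, 0.0, 2.0), sensors, occluded={"kinect-A"}).name)  # kinect-B
```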


Computers & Graphics | 2017

Design and evaluation of a novel out-of-reach selection technique for VR using iterative refinement

Daniel Mendes; Daniel Medeiros; Maurício Sousa; Eduardo Cordeiro; Alfredo Ferreira; Joaquim A. Jorge

In interactive systems, the ability to select virtual objects is essential. In immersive virtual environments, object selection is usually done at arm's length in mid-air by directly intersecting the desired object with the user's hand. However, selecting objects outside the user's arm reach still poses significant challenges, which direct approaches fail to address. Techniques proposed to overcome such limitations often follow an arm-extension metaphor or favor selection volumes combined with ray-casting. Nonetheless, while these approaches work for room-sized environments, they hardly scale up to larger scenarios with many objects. In this paper, we introduce a new taxonomy to classify existing selection techniques. Building on it, we propose PRECIOUS, a novel mid-air technique for selecting out-of-reach objects, featuring iterative refinement in Virtual Reality, a hitherto untried approach in this context. While comparable techniques have been developed for non-stereo and non-immersive environments, these are not suitable for Immersive Virtual Reality. Our technique is the first to employ iterative progressive refinement in such settings. It uses cone-casting to select multiple objects and moves the user closer to them in each refinement step, to allow accurate selection of the desired target. A user evaluation showed that PRECIOUS compares favorably against state-of-the-art approaches. Indeed, our results indicate that PRECIOUS is a versatile approach to out-of-reach target acquisition, combining accurate selection with consistent task completion times across different scenarios.
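
As an illustration of cone-casting with progressive refinement, the sketch below narrows a candidate set step by step: objects inside the aimed cone stay as candidates, the viewpoint moves closer to them, and the user re-aims from the new position so the same cone covers a smaller region. It is a schematic reconstruction under stated assumptions, not the PRECIOUS implementation.

```python
# Schematic cone-casting with progressive refinement (illustrative, not PRECIOUS).
import math

def angle_between(u, v) -> float:
    """Angle in radians between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1e-9
    nv = math.sqrt(sum(b * b for b in v)) or 1e-9
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def refine_step(apex, aim_dir, candidates, half_angle=math.radians(15)):
    """One refinement step: cone-cast, then move the viewpoint towards the hits."""
    hits = [o for o in candidates
            if angle_between([oi - ai for oi, ai in zip(o, apex)], aim_dir) <= half_angle]
    if len(hits) <= 1:
        return apex, hits
    centroid = [sum(o[i] for o in hits) / len(hits) for i in range(3)]
    new_apex = [a + 0.5 * (c - a) for a, c in zip(apex, centroid)]  # step closer
    return new_apex, hits

# Simulated session: two nearby distant objects; the user keeps aiming at `target`.
objects = [(0.2, 0.0, 20.0), (1.5, 0.0, 20.0)]
target = objects[0]
apex, candidates = [0.0, 0.0, 0.0], objects
while len(candidates) > 1:
    aim = [t - a for t, a in zip(target, apex)]          # user points at the target
    apex, candidates = refine_step(apex, aim, candidates)
print(candidates)  # only the aimed-at object remains after a few steps
```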


International Conference on Artificial Reality and Telexistence | 2014

Beyond post-it: structured multimedia annotations for collaborative VEs

João Guerreiro; Daniel Medeiros; Daniel Mendes; Maurício Sousa; Joaquim A. Jorge; Alberto Barbosa Raposo; Ismael H. F. dos Santos

Globalization has transformed engineering design into a world-wide endeavor pursued by geographically distributed specialist teams. Widespread adoption of VR for design, and the need to act on and place marks directly on the objects under discussion in design-review tasks, led to research on annotations in collaborative virtual environments. However, conventional approaches have yet to progress beyond the yellow post-it + text metaphor. Indeed, multimedia such as audio, sketches, video and animations afford greater expressiveness which could be put to good use in collaborative environments. Furthermore, individual annotations fail to capture both the rationale and the flow of discussion, which are key to understanding project design decisions. One exemplary instance is offshore engineering projects, which normally engage geographically distributed, highly specialized engineering teams and require both improved productivity, due to project costs, and reduced risk when reviewing designs of deep-water oil & gas platforms. In this paper, we present an approach to rich, structured multimedia annotations to support discussion and decision making in design-review tasks. Furthermore, our approach supports issue-based argumentation to reveal the provenance of design decisions and better support the workflow in engineering projects. While this is an initial exploration of the solution space, examples show greater support for collaborative design review than traditional approaches.
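
As a purely illustrative sketch of what a structured multimedia annotation could look like in code, the data structure below anchors a note to a reviewed object, attaches multimedia items, and types replies in an issue-based (IBIS-like) style. All field names and the example thread are assumptions, not the paper's data model.

```python
# Hypothetical structured annotation: anchored note + multimedia + typed replies.
from dataclasses import dataclass, field

@dataclass
class Attachment:
    kind: str          # e.g. "audio", "sketch", "video", "animation", "text"
    uri: str

@dataclass
class Annotation:
    author: str
    object_id: str                         # reviewed 3D object the note is anchored to
    role: str = "issue"                    # issue-based argumentation: "issue" | "position" | "argument"
    attachments: list[Attachment] = field(default_factory=list)
    replies: list["Annotation"] = field(default_factory=list)

# A tiny discussion thread: an issue with an audio note, answered by a position
# backed by a sketch, preserving who said what and in which order.
issue = Annotation("engineer-A", "riser-42", "issue",
                   attachments=[Attachment("audio", "note-001.ogg")])
issue.replies.append(Annotation("engineer-B", "riser-42", "position",
                                attachments=[Attachment("sketch", "fix-proposal.png")]))
```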


Virtual Reality Software and Technology | 2016

Perceiving depth: optical versus video see-through

Daniel Medeiros; Maurício Sousa; Daniel Mendes; Alberto Barbosa Raposo; Joaquim A. Jorge

Head-Mounted Displays (HMDs) and similar 3D visualization devices are becoming ubiquitous. Going a step further, see-through HMD systems bring virtual objects into real-world settings, allowing augmented reality to be used in complex engineering scenarios. Of these, optical and video see-through systems differ in how the real world is captured by the device. To provide a seamless integration of real and virtual imagery, the absolute depth and size of both virtual and real objects should match appropriately. However, these technologies are still in their early stages, each featuring different strengths and weaknesses that affect the user experience. In this work we compare optical and video see-through systems, focusing on depth perception via exocentric and egocentric methods. Our study pairs Meta Glasses, an off-the-shelf optical see-through device, with a modified Oculus Rift setup with attached video cameras for video see-through. Results show that, with the hardware currently available, the video see-through configuration provides better overall results. These experiments and our results can help interaction designers working with both virtual and augmented reality.


2013 XV Symposium on Virtual and Augmented Reality | 2013

An Interaction Tool for Immersive Environments Using Mobile Devices

Daniel Medeiros; Felipe Gomes de Carvalho; Alberto Barbosa Raposo; Ismael H. F. dos Santos

Interaction in engineering virtual environments is distinguished by the high precision required to execute the specific tasks of this kind of environment. Such tasks generally rely on dedicated interaction devices with 4 or more DOF. Current applications involving 3D interaction use interaction devices for object modeling or for implementing navigation, selection and manipulation techniques in a virtual environment. A related problem is the need to control tasks that are naturally non-immersive, such as symbolic input (e.g., text, photos). Another problem is the steep learning curve of such non-conventional devices. The addition of sensors and the popularization of smartphones and tablets have enabled the use of such devices in virtual engineering environments. These devices, beyond their popularity and sensors, stand out for their ability to display additional information and to perform naturally non-immersive tasks. This work presents a tablet-based 3D interaction tool that aggregates all major 3D interaction tasks: navigation, selection, manipulation, system control and symbolic input.


OTC Brasil | 2013

A Robotics Framework for Planning the Offshore Robotizing Using Virtual Reality Techniques

Ismael H. F. dos Santos; Gabriel Motta Ribeiro; Fernando Coutinho; Liu Hsu; Alberto Barbosa Raposo; Felipe Gomes de Carvalho; Daniel Medeiros; Mauricio Galassi; Ramon R. Costa; P. Arroyo From; Gustavo M. Freitas; Thiago B. Almeida-Antonio; Fernando Lizarralde

The Oil & Gas industry has seen increasing costs of finding and extracting hydrocarbons, especially in remote locations, ultra-deep water reservoirs (400 m or deeper) or hostile environments. These new exploration frontiers have increased production complexity and logistic costs. Under such conditions, the feasibility of oil exploration depends on new technologies to optimize production efficiency. One possible solution to this challenge is to increase the degree of automation in production units; new design concepts also consider the use of robotic devices in such scenarios. In this paper we present a robotics framework, SimUEP-Robotics (Robotics Simulator for Stationary Production Units, Unidades Estacionárias de Produção or UEPs in Portuguese), aimed at planning the robotization of offshore platforms using virtual reality techniques. SimUEP-Robotics is based on ROS (Robot Operating System), a middleware for exchanging messages between the different devices and processes that cooperate to accomplish a robotics task. SimUEP-Robotics is designed around offshore requirements and is a flexible framework that allows new robots and devices to be included in a virtual operation scenario. This capability enables the robotization and automation of offshore facilities to evolve gradually, starting from a completely virtual scenario and progressing towards a complete robotic system operating on a real platform, with real devices included progressively. SimUEP-Robotics has an integrated Virtual Reality Engine (VR-Engine) specially tailored to provide realistic visualization of large offshore scene models in an immersive environment. The monitoring and management of remote operations of Stationary Production Units (SPUs) is an activity that can also benefit from virtual reality scenarios, given their potential to reduce the complexity and difficulty of visualizing and validating simulations of operations performed by robots on a real SPU. The framework supports simultaneous simulation of multiple robots equipped with sensors and actuators such as cameras, laser range finders and robotic manipulators. SimUEP-Robotics also offers specialized visualization tools, including a trajectory visualizer, ghost-view robot animation, point-to-point measurement and a scenario editor that lets the user customize the target scenario. Through these visualization tools it is possible, for example, to better assess the quality of a planned robot trajectory and to propose new algorithms that can then be evaluated in the virtual environment. We argue that this validation process in an immersive virtual environment reduces the risks and costs of real operation test scenarios. SimUEP-Robotics also has an integrated Robotics Simulator responsible for task planning and execution based on the information about the virtual scenario provided by the VR-Engine. To illustrate the effectiveness of the framework, different robotics applications were developed. One is an underwater application that computes the full dynamics of a remotely operated vehicle (ROV) to simulate and test complex ROV operations in deep water, such as connecting a flowline to a Christmas tree. The other represents a topside offshore platform scenario where different virtual robots, derived from real mechanisms such as the Motoman DIA10, Puma 560, Seekur and others, operate. Results obtained on a pick-and-place task demonstrate the benefits of the proposed robotics framework for offshore applications.
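
Since the framework is built on ROS, a minimal node in the usual ROS 1 publish/subscribe style gives a feel for the message passing it relies on. This sketch assumes a standard ROS 1 installation with rospy and geometry_msgs; the topic name and velocity value are conventional examples, not taken from SimUEP-Robotics itself.

```python
#!/usr/bin/env python
# Minimal ROS 1 node sketch: publish velocity commands on a topic, the same
# message-passing pattern that connects simulated devices and processes.
import rospy
from geometry_msgs.msg import Twist

def drive_forward():
    rospy.init_node("simuep_sketch")                      # register with the ROS master
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=10)
    rate = rospy.Rate(10)                                 # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.2                                    # move forward at 0.2 m/s
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    drive_forward()
```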

Collaboration


Dive into Daniel Medeiros's collaborations.

Top Co-Authors


Alberto Barbosa Raposo

Pontifical Catholic University of Rio de Janeiro


Felipe Gomes de Carvalho

Pontifical Catholic University of Rio de Janeiro


Eduardo Cordeiro

Instituto Superior Técnico


Lucas Teixeira

Pontifical Catholic University of Rio de Janeiro


Eduardo Ribeiro

Pontifical Catholic University of Rio de Janeiro
