
Publication


Featured research published by Felix G. Hamza-Lup.


Presence: Teleoperators & Virtual Environments | 2005

Development of head-mounted projection displays for distributed, collaborative, augmented reality applications

Jannick P. Rolland; Frank A. Biocca; Felix G. Hamza-Lup; Yonggang Ha; Ricardo Martins

Distributed systems technologies supporting 3D visualization and social collaboration are increasing in frequency and type. An emerging type of head-mounted display, the head-mounted projection display (HMPD), requires only ultralight optics (less than 8 g per eye) and enables immersive multiuser, mobile augmented reality 3D visualization as well as remote 3D collaboration. This paper reviews the development of lightweight HMPD technology and explains what makes it timely and unique. Two novel emerging HMPD-based technologies are then described: a teleportal HMPD (T-HMPD) enabling face-to-face communication and visualization of shared 3D virtual objects, and a mobile HMPD (M-HMPD) designed for outdoor wearable visualization and communication. Finally, the use of HMPDs in medical visualization and training and in infospaces, two applications developed in the ODA and MIND labs respectively, is discussed.


IEEE Computer Graphics and Applications | 2003

Enabling a continuum of virtual environment experiences

Larry Davis; Jannick P. Rolland; Felix G. Hamza-Lup; Yonggang Ha; Jack Norfleet; Celina Imielinska

We define a virtual environment (VE) as a set of surroundings that appears to a user through computer-generated sensory stimuli. The level of immersion, or sense of being in another world, that a user experiences within a VE relates to how much stimulation the computer delivers to the user. Thus, one can classify VEs along a virtuality continuum, which ranges from the real world to an entirely computer-generated environment. We present a technology that allows seamless transitions between levels of immersion in VEs. Milgram and Kishino (1994) first proposed the concept of a virtuality continuum in the context of visual displays. The concept extends to multimodal VEs, which combine multiple sensory stimuli, including 3D sound and haptic capability, leading to a multidimensional virtuality continuum. Emerging applications will benefit from multiple levels of immersion, requiring innovative multimodal technologies and the ability to traverse the multidimensional virtuality continuum.


Proceedings of the AMI-ARCS 2004 Workshop | 2004

Physically-based Deformation of High-Resolution 3D Lung Models for Augmented Reality based Medical Visualization

Anand P. Santhanam; Cali M. Fidopiastis; Felix G. Hamza-Lup; Jannick P. Rolland; Celina Imielinska

Visualization tools using augmented reality environments are effective in applications related to medical training, prognosis, and expert interaction. Such tools can also provide key visual insight into the physiology of deformable anatomical organs (e.g., lungs). In this paper we propose a deformation method that facilitates physically-based elastostatic deformation of high-resolution 3D polygonal models. The method is implemented as a pre-computation step and demonstrated on a high-resolution 3D lung model. The deformation is represented as an integration of the applied force and the local elastic property assigned to the 3D lung model. The proposed method converges to equilibrium faster than other physically-based simulation methods and accounts for anisotropic tissue elastic properties. The transfer functions are formulated so that they overcome stiffness effects during deformation.
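
The abstract's core recipe, displacement obtained by integrating the applied force against a local elastic property, can be sketched as a simple relaxation loop. The sketch below is a minimal illustration under assumed names and constants (a scalar per-vertex compliance, a fixed step size), not the authors' pre-computation pipeline.

```python
import numpy as np

def precompute_deformation(vertices, forces, compliance,
                           step=0.1, iters=200, tol=1e-6):
    """Relax a mesh toward elastostatic equilibrium: the residual force is
    the applied load minus a linear restoring term, and each vertex moves
    along that residual scaled by its local compliance."""
    u = np.zeros_like(vertices)                  # accumulated displacement
    for _ in range(iters):
        # Residual = applied load minus restoring force u / compliance;
        # the transfer function here is the per-vertex compliance itself.
        residual = forces - u / compliance[:, None]
        du = step * compliance[:, None] * residual
        u += du
        if np.linalg.norm(du) < tol:             # equilibrium reached
            break
    return vertices + u
```

At the fixed point the displacement equals compliance times applied force, the linear elastostatic relation the abstract describes; anisotropy would replace the scalar compliance with a per-vertex tensor.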


Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems | 2008

Feel the Pressure: E-learning Systems with Haptic Feedback

Felix G. Hamza-Lup; Michele Adams

Involving students in the learning process has been a challenge for educators for many years. Multimodal virtual environments, where visual, auditory, and haptic stimuli are present, may convey information in a more naturalistic fashion, since the user manipulates and experiences the environment through multiple sensory channels. We present a novel e-learning system that incorporates a multimodal haptic simulator. The simulator facilitates student understanding of difficult concepts (e.g., physics concepts) and has the potential to augment or replace traditional laboratory instruction with an interactive interface offering enhanced motivation, retention, and intellectual stimulation.
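
For readers unfamiliar with how haptic feedback is typically rendered in such simulators, the sketch below computes a standard penalty-based spring-damper contact force. It is a generic illustration with assumed constants and function names, not the system described in the paper.

```python
import numpy as np

def contact_force(position, velocity, surface_z=0.0,
                  stiffness=300.0, damping=2.0):
    """Spring-damper force for a flat virtual surface: when the device
    proxy penetrates the surface, push back proportionally to penetration
    depth (spring) and penetration speed (damper)."""
    penetration = surface_z - position[2]        # depth below the surface
    if penetration <= 0.0:
        return np.zeros(3)                       # no contact, no force
    fz = stiffness * penetration - damping * velocity[2]
    return np.array([0.0, 0.0, max(fz, 0.0)])    # never pull into surface
```

Real haptic devices evaluate a loop like this at roughly 1 kHz, which is why the force law must stay this cheap.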


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Distributed Augmented Reality With 3-D Lung Dynamics—A Planning Tool Concept

Felix G. Hamza-Lup; Anand P. Santhanam; Celina Imielinska; Sanford L. Meeks; Jannick P. Rolland

Augmented reality (AR) systems add visual information to the world by using advanced display techniques. Advances in miniaturization and reduced hardware costs make some of these systems feasible for applications in a wide range of fields. We present a potential component of the cyber infrastructure for the operating room of the future: a distributed AR-based software-hardware system that allows real-time visualization of three-dimensional (3-D) lung dynamics superimposed directly on the patient's body. Several emergency events (e.g., closed and tension pneumothorax) and lung-related surgical procedures (e.g., lung transplantation, lung volume reduction surgery, surgical treatment of lung infections, lung cancer surgery) could benefit from the proposed prototype.


Medicine Meets Virtual Reality | 2003

Development of a training tool for endotracheal intubation: distributed augmented reality.

Jannick P. Rolland; Larry Davis; Felix G. Hamza-Lup; Jason Daly; Yonggang Ha; Glenn A. Martin; Jack Norfleet; Richard Thumann; Celina Imielinska

The authors introduce a tool referred to as the Ultimate Intubation Head (UIH) that uses augmented reality methods to train medical practitioners' hand-eye coordination in performing endotracheal intubation. In this paper we describe the integration of a deployable UIH and present methods for augmented reality registration of real and virtual anatomical models. Assessment of the 52-degree field-of-view optics of the custom-designed and custom-built head-mounted display shows less than 1.5 arc minutes of blur and astigmatism, the two limiting optical aberrations, and less than 2.5% distortion. Preliminary registration of a physical phantom mandible to its virtual counterpart yields an error of less than 3 mm RMS. Finally, we describe an approach to distributed visualization in which a given training procedure may be visualized and shared at various remote locations. Basic assessments of delays within two data-distribution scenarios were conducted and are reported.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Simulating 3-D Lung Dynamics Using a Programmable Graphics Processing Unit

Anand P. Santhanam; Felix G. Hamza-Lup; Jannick P. Rolland

Medical simulations of lung dynamics promise to be effective tools for teaching and training clinical and surgical procedures related to the lungs. Their effectiveness may be greatly enhanced when visualized in an augmented reality (AR) environment. However, the computational requirements of AR environments limit the availability of the central processing unit (CPU) for simulating lung dynamics under different breathing conditions. In this paper, we present a method for computing lung deformations in real time by taking advantage of the programmable graphics processing unit (GPU), freeing the CPU for other AR-associated tasks such as tracking, communication, and interaction management. We start from an approach that simulates three-dimensional (3-D) lung dynamics using Green's formulation for the upright position, and we extend it to other orientations as well as the subsequent changes in breathing. Specifically, the proposed extension presents a computational optimization and its implementation on a GPU. Results show that the computational requirements for simulating the deformation of a 3-D lung model are significantly reduced for point-based rendering.
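
What makes this workload map well onto a GPU is that a Green's-function evaluation reduces to dense kernel-times-force array math. The sketch below uses a simplified 1/r kernel and NumPy as a stand-in for the GPU implementation; the kernel form and parameter names are assumptions, not the paper's formulation.

```python
import numpy as np

def greens_displacement(points, force_pts, forces, mu=1.0):
    """Accumulate, at every mesh point, the influence of every applied
    force weighted by a 1/r kernel (a simplified stand-in for the
    elastostatic Green's function). The nested loops collapse into one
    dense matrix product, the structure a GPU executes efficiently."""
    diff = points[:, None, :] - force_pts[None, :, :]   # (n, m, 3)
    r = np.linalg.norm(diff, axis=2) + 1e-9             # avoid divide-by-zero
    kernel = 1.0 / (4.0 * np.pi * mu * r)               # (n, m) weights
    return kernel @ forces                              # (n, 3) displacements
```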


Computer Assisted Radiology and Surgery | 2008

Online external beam radiation treatment simulator

Felix G. Hamza-Lup; Ivan Sopin; O Zeidan

Radiation therapy is an effective and widely accepted form of treatment for many types of cancer, but it requires extensive computerized planning. Unfortunately, current treatment planning systems offer limited or no visual aid that combines patient volumetric models extracted from patient-specific CT data with the treatment device geometry in a 3D interactive simulation. We illustrate the potential of 3D simulation in radiation therapy with a web-based interactive system that combines novel standards and technologies. We discuss related research efforts in this area and present several components of the simulator in detail. An objective assessment of the simulator's accuracy and a usability study demonstrate the potential of such a system for simulation and training.


International Symposium on Mixed and Augmented Reality | 2004

A method for designing marker-based tracking probes

Larry S. Davis; Felix G. Hamza-Lup; Jannick P. Rolland

Many tracking systems determine the pose of objects within an environment using collections of fiducial markers arranged in rigid configurations, called tracking probes. In this paper, we present a technique for designing tracking probes called the viewpoints algorithm. The algorithm is generally applicable to tracking systems that use at least three fiducial marks to determine the pose of an object. We use the algorithm to create an integrated, head-mounted display tracking probe. The predicted accuracy of this probe was 0.032 ± 0.02 degrees in orientation and 0.09 ± 0.07 mm in position. The measured accuracy was 0.028 ± 0.01 degrees in orientation and 0.11 ± 0.01 mm in position. These results translate to a predicted static positional overlay error of less than 0.5 mm for a virtual object presented at 1 m. The algorithm is part of a larger framework for designing tracking probes based upon performance goals and environmental constraints.
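
For context, the pose computation that such probes feed is usually solved in closed form with the Kabsch/Horn absolute-orientation method. The sketch below shows that standard solution for three or more markers; it is not the viewpoints algorithm, which addresses where to place the markers rather than how to solve for pose.

```python
import numpy as np

def estimate_pose(model_pts, measured_pts):
    """Closed-form rigid pose from N >= 3 corresponding, non-collinear
    fiducial markers: returns R, t with measured ~= model @ R.T + t."""
    mc, sc = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - mc).T @ (measured_pts - sc)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t
```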


Collaborative Virtual Environments | 2004

Scene synchronization for real-time interaction in distributed mixed reality and virtual reality environments

Felix G. Hamza-Lup; Jannick P. Rolland

Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information to remote locations allows efficient communication. One of the challenges in networked virtual environments is maintaining a consistent view of the shared state in the presence of inevitable network latency and jitter. A consistent view of a shared scene may significantly increase the sense of presence among participants and facilitate their interactivity. The dynamic shared state is directly affected by the frequency of the actions applied to objects in the scene. Mixed reality (MR) and virtual reality (VR) environments contain several types of action producers, including human users, a wide range of electronic motion sensors, and haptic devices. In this paper, we propose a novel criterion for categorizing distributed MR/VR systems and present an adaptive synchronization algorithm for distributed MR/VR collaborative environments. Results show that, in spite of significant network latency, the dynamic shared state can be kept consistent across multiple remote sites at low update frequencies.
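
To illustrate the general idea of adapting update traffic to network conditions (the paper's specific algorithm is not reproduced here), the following sketch lowers an object's update frequency as the smoothed round-trip latency grows; the class and its parameters are hypothetical.

```python
import time

class AdaptiveSync:
    """Throttle shared-state updates: as measured round-trip latency
    rises, send less often so remote sites converge on a consistent
    state instead of queuing stale updates."""
    def __init__(self, base_hz=30.0, min_hz=1.0):
        self.base_hz, self.min_hz = base_hz, min_hz
        self.latency = 0.0                     # smoothed RTT, seconds
        self.last_send = 0.0

    def observe_rtt(self, rtt, alpha=0.125):
        # Exponential moving average, as in TCP's RTT estimator.
        self.latency = (1.0 - alpha) * self.latency + alpha * rtt

    def should_send(self, now=None):
        now = time.monotonic() if now is None else now
        hz = max(self.min_hz,
                 self.base_hz / (1.0 + self.latency * self.base_hz))
        if now - self.last_send >= 1.0 / hz:
            self.last_send = now
            return True
        return False
```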

Collaboration


Dive into Felix G. Hamza-Lup's collaborations.

Top Co-Authors

Ivan Sopin, Armstrong State University
Larry Davis, University of Central Florida
O Zeidan, University of Texas MD Anderson Cancer Center
Yonggang Ha, University of Central Florida
Cali M. Fidopiastis, University of Central Florida