
Publication


Featured research published by Shahzad Rasool.


International Conference on Computer Graphics and Interactive Techniques | 2011

Tangible images

Shahzad Rasool; Alexei Sourin

Visual and haptic rendering pipelines exist concurrently and compete for computing resources, while the refresh rate of haptic rendering is two orders of magnitude higher than that of visual rendering (1000 Hz vs. 30–50 Hz). However, in certain cases, 3D visual rendering can be replaced by merely displaying 2D images, thus releasing the resources to image-driven haptic rendering algorithms. A number of approaches have been proposed to provide haptic interaction with 2D images, but they suffer from various problems and do not provide a fully believable haptic sensation of the 3D scene displayed in the image. Based on the method of haptic effect generation, the existing approaches can be classified into techniques that use information derived from the image, and techniques that use additional information along with the image to enable haptic interaction. Previously, we proposed our own method of augmenting images with haptic data for use in shared virtual environments [Rasool and Sourin 2010]. The images were augmented with invisible haptic objects defined by implicit, explicit and parametric mathematical functions using the function-based extension of the Virtual Reality Modeling Language [Wei and Sourin 2011].
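The augmentation described above can be illustrated with a minimal sketch (hypothetical names and parameter values, not the authors' VRML-based implementation): an invisible implicit object is attached to the image, and a penalty force proportional to penetration depth pushes the haptic tool back out whenever it moves inside the object.

```python
import numpy as np

def implicit_sphere(p, center, radius):
    """Implicit function: f(p) < 0 inside the sphere, > 0 outside."""
    return np.linalg.norm(p - center) - radius

def haptic_force(p, center, radius, stiffness=200.0):
    """Penalty force pushing the haptic tool out of an invisible object.

    Returns a zero vector when the tool is outside the surface.
    """
    d = implicit_sphere(p, center, radius)
    if d >= 0.0:
        return np.zeros(3)
    # The gradient of the distance field points away from the center.
    normal = (p - center) / np.linalg.norm(p - center)
    return -stiffness * d * normal  # -d > 0, so force acts along outward normal

tool = np.array([0.0, 0.0, 0.95])  # tool slightly inside a unit sphere
f = haptic_force(tool, center=np.zeros(3), radius=1.0)
```

The same scheme extends to explicit and parametric functions; only the inside/outside test and the normal computation change.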


The Visual Computer | 2013

Image-driven virtual simulation of arthroscopy

Shahzad Rasool; Alexei Sourin

In recent years, minimally invasive arthroscopic surgery has replaced a number of conventional open orthopedic surgery procedures on joints. While this brings a number of advantages for the patient, surgeons have to learn very different skills, since the surgery is performed with special miniature pencil-like tools and cameras inserted through small incisions while observing the surgical field on a video monitor. Virtual reality simulation therefore becomes an alternative to traditional surgical training based on the centuries-old apprentice–master model, which involves either real patients or increasingly difficult to procure cadavers. Normally, 3D simulation of the virtual surgical field requires significant effort from software developers and yet is not always photorealistic. In contrast, for photorealistic visualization of and haptic interaction with the surgical field, we propose to use real arthroscopic images augmented with 3D object models. The proposed technique lets the joint cavity displayed on the video monitor be felt as real 3D objects rather than as images, while various surgical procedures, such as meniscectomy, are simulated in real time. In the preprocessing stage of the proposed approach, the arthroscopic images are stitched into panoramas and augmented with implicitly defined object models representing deformable menisci. In the simulation loop, depth information from the mixed scene is used for haptic rendering. The scene depth map and visual display are reevaluated only when the scene is modified.
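The depth-driven part of the simulation loop can be sketched as follows (a simplified illustration with assumed names and stiffness; the actual simulator also handles proxy tracking, deformation and tool constraints). The tool's depth along the viewing axis is compared against the scene depth map at the touched pixel, and a spring force resists penetration:

```python
import numpy as np

def force_from_depth(depth_map, tool_xyz, stiffness=150.0):
    """Compute a 1-D feedback force from a scene depth map.

    tool_xyz = (x, y, z): pixel coordinates of the haptic cursor plus its
    depth along the viewing axis (larger z is farther from the viewer).
    If the tool has moved deeper than the visible surface at that pixel,
    a spring force pushes it back toward the viewer.
    """
    x, y, z = tool_xyz
    surface_z = depth_map[int(y), int(x)]   # depth of the visible surface
    penetration = z - surface_z             # > 0 when the tool is inside
    if penetration <= 0:
        return 0.0
    return stiffness * penetration          # force magnitude toward the viewer

depth = np.full((480, 640), 0.8)            # synthetic flat scene at depth 0.8
f = force_from_depth(depth, (320, 240, 0.85))
```

Because the force depends only on the depth map under the cursor, it needs to be recomputed over the whole scene only when the scene changes, as the abstract notes.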


Virtual Reality Continuum and Its Applications in Industry | 2011

Haptic interaction with 2D images

Shahzad Rasool; Alexei Sourin

Visual and haptic rendering pipelines exist concurrently and compete for computing resources, while the refresh rate of haptic rendering is two orders of magnitude higher than that of visual rendering (1000 Hz vs. 30–50 Hz). However, in many cases, 3D visual rendering can be replaced by merely displaying 2D images, thus releasing the resources to image-driven haptic rendering algorithms. These algorithms provide haptic texture rendering in the vicinity of a touch point, but usually require additional information augmented with the image to provide haptic perception of the geometry of the shapes displayed in it. We propose a framework for making tangible images which allows haptic perception of three features: scene geometry, texture and physical properties. The haptic geometry rendering technique uses depth information, which can be acquired in a variety of ways, to provide haptic interaction with images and videos in real time. The presented method neither performs 3D reconstruction nor requires polygonal models. It is based on direct force calculation and allows for smooth haptic interaction even at object boundaries. We also propose dynamic mapping of the haptic workspace in real time to enable sensation of fine surface details. Alternatively, one of the existing shading-based haptic texture rendering methods can be combined with the proposed haptic geometry rendering algorithm to provide believable interaction. Haptic perception of physical properties is achieved by automatic segmentation of an image into haptic regions and interactive assignment of physical properties to them.
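The last step, assigning physical properties to segmented haptic regions, can be illustrated with a small sketch (the label map, property names and values here are hypothetical, not the paper's data): each pixel carries a segmentation label, and the haptic loop looks up the stiffness and friction of the region under the cursor.

```python
import numpy as np

# Hypothetical haptic-region properties: each segmentation label maps to
# interactively assigned physical parameters.
properties = {
    0: {"stiffness": 100.0, "friction": 0.2},   # background
    1: {"stiffness": 400.0, "friction": 0.6},   # e.g. a rigid object region
}

def properties_at(label_map, x, y):
    """Look up the physical properties of the region under the cursor."""
    return properties[int(label_map[y, x])]

label_map = np.zeros((4, 4), dtype=int)
label_map[1:3, 1:3] = 1                         # a small segmented region
p = properties_at(label_map, 2, 2)
```

In a full renderer these per-region parameters would scale the geometry-derived force and the tangential friction force, so touching different image regions feels different.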


Cyberworlds | 2010

Towards Tangible Images and Video in Cyberworlds – Function-Based Approach

Shahzad Rasool; Alexei Sourin

Haptic interaction is commonly used with 3D objects defined by their geometric and solid models. Extending haptic interaction to 3D cyberworlds is a challenging task due to Internet bandwidth constraints and the often prohibitive sizes of the models. We study how to replace visual and haptic rendering of shared 3D objects with 2D image visualization and 3D haptic rendering of forces reconstructed from the images or from data augmenting them, which will eventually simulate realistic haptic interaction with 3D objects. This approach allows us to redistribute the computing power so that it is concentrated mainly on the tasks of haptic interaction and rendering. We propose how to implement such interaction with small function-based descriptions of the haptic information augmenting images and video. We illustrate the proposed ideas with the function-based extension of VRML and X3D.


Transactions on Computational Science | 2014

Image-Driven Haptic Rendering

Shahzad Rasool; Alexei Sourin

Haptic interaction requires content creators to make haptic models of the virtual objects, which is not always possible or feasible, especially when real images or videos are used as elements of interaction. We therefore propose tangible images and image-driven haptic rendering, where a displayed image is used as a source of force-feedback calculations at any pixel touched by the haptic device. We introduce the main idea and describe how it is implemented as a core algorithm for image-driven haptic rendering, as well as for a few particular cases of haptic rendering emphasizing the colors, contours and textures of the objects displayed in the images. Implementations of the proposed method in a desktop tangible-image application and in haptic video communication on the web are presented as a proof of concept.
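As a generic illustration of such pixel-driven force computation (not the paper's exact algorithm), one can treat image brightness as a height field and derive a lateral force from its local gradient, so that contours and textures of the displayed objects are felt under the tool:

```python
import numpy as np

def texture_force(gray, x, y, scale=0.5):
    """Lateral force from local image intensity gradients (a sketch).

    Treating brightness as a height field, the tool is pushed 'downhill',
    i.e. against the local gradient, so edges and textures become tangible.
    """
    h, w = gray.shape
    # Central differences, clamped at the image border.
    gx = (gray[y, min(x + 1, w - 1)] - gray[y, max(x - 1, 0)]) / 2.0
    gy = (gray[min(y + 1, h - 1), x] - gray[max(y - 1, 0), x]) / 2.0
    return (-scale * gx, -scale * gy)

img = np.tile(np.linspace(0.0, 1.0, 5), (5, 1))  # brightness ramp along x
fx, fy = texture_force(img, 2, 2)
```

On the ramp the force points toward the darker side and vanishes along the constant direction; a real renderer would combine this lateral term with a normal force and run the lookup at the haptic rate.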


The Visual Computer | 2016

Real-time haptic interaction with RGBD video streams

Shahzad Rasool; Alexei Sourin

Video interaction is a common way of communication in cyberspace. It can become more immersive by incorporating the haptic modality. Using commonly available depth-sensing controllers such as Microsoft Kinect, information about the depth of a scene can be captured in real time together with the video. In this paper, we present a method for real-time haptic interaction with videos containing depth data. Forces are computed based on the depth information. Spatial and temporal filtering of the depth stream is used to provide stability of the force feedback delivered to the haptic device. Fast collision detection ensures that the proposed approach can be used in real time. We present an analysis of various factors that affect algorithm performance. The usefulness of the approach is illustrated by highlighting possible application scenarios.
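One simple way to realize the temporal filtering mentioned above is an exponential moving average over the incoming depth frames (a sketch with an assumed gain, not the paper's exact filter): per-frame sensor noise is suppressed at the cost of slight lag, which stabilizes the force feedback.

```python
import numpy as np

def smooth_depth_stream(frames, alpha=0.3):
    """Exponential moving average over a sequence of depth frames.

    A small alpha weights the history more heavily, suppressing per-frame
    sensor noise before forces are computed from the depth values.
    """
    smoothed = frames[0].astype(float)
    out = [smoothed.copy()]
    for frame in frames[1:]:
        smoothed = alpha * frame + (1.0 - alpha) * smoothed
        out.append(smoothed.copy())
    return out

rng = np.random.default_rng(0)
# Synthetic stream: a flat surface at depth 1.0 plus Kinect-like noise.
noisy = [np.full((4, 4), 1.0) + 0.05 * rng.standard_normal((4, 4))
         for _ in range(30)]
filtered = smooth_depth_stream(noisy)
```

Spatial filtering (e.g. a small median or Gaussian kernel per frame) would be applied in the same loop before the temporal pass.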


Cyberworlds | 2013

Image-Driven Haptic Rendering in Virtual Environments

Shahzad Rasool; Alexei Sourin

Haptic interaction significantly augments our experience with a computer, and in cyberworlds in particular. However, haptic interaction requires content creators to make physical or haptic models of the virtual objects, which is not always possible or feasible, especially when real images or videos are used as elements of interaction. We therefore propose to use image-driven haptic rendering, where a displayed image, real or simulated, is used as a source of force-feedback calculations at any pixel touched by the haptic device. We introduce the main idea and describe how it is implemented as a core algorithm for image-driven haptic rendering, as well as for a few particular cases of haptic rendering of different dominant colors, textures and contours of the objects displayed in the images. Implementations of the proposed method in a desktop tangible-image application and in haptic video communication on the web are presented as a proof of concept.


Virtual Reality Software and Technology | 2013

Towards hand-eye coordination training in virtual knee arthroscopy

Shahzad Rasool; Alexei Sourin; Pingjun Xia; Bin Weng; Fareed Kagda

Minimally invasive arthroscopic surgery has replaced many common orthopaedic surgery procedures on joints. However, it demands that surgeons acquire very different motor skills for using special miniature pencil-like instruments and cameras inserted through small incisions in the body while observing the surgical field on a video monitor. Training in virtual reality is becoming an alternative to traditional surgical training based on either real patients or increasingly difficult to procure cadavers. In this paper, we propose solutions for simulating a few basic arthroscopic procedures in virtual environments, including insertion of the arthroscopic camera, positioning of the instrument in front of it, and the use of scissors and graspers. Our approach is based on both full 3D simulation with haptic interaction and image-based visualization with haptic interaction.


Systems, Man and Cybernetics | 2016

Assessing haptic video interaction with neurocognitive tools

Shahzad Rasool; Xiyuan Hou; Yisi Liu; Alexei Sourin; Olga Sourina

Haptic interaction is a form of user-computer interaction where physical forces are delivered to the user via vibrations, displacements and rotations of special haptic devices. When the quality of the haptic interaction experience is assessed, mostly subjective tests using various questionnaires are performed. We propose novel neurocognitive tools for assessing both the overall experience of haptic interaction and particular time-stamped activities. Our assessment tools are based on recognition of emotions and stress from electroencephalograms (EEG). We used them in a feasibility study on adding haptic interaction to Skype video conversations.
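The EEG features underlying such tools are typically spectral band powers; as a generic, simplified illustration (not the authors' actual emotion and stress recognition models), a beta/alpha power ratio computed over a signal window can serve as a crude arousal indicator:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 256                                   # assumed sampling rate in Hz
t = np.arange(fs * 4) / fs                 # a 4-second analysis window
signal = np.sin(2 * np.pi * 10 * t)        # synthetic pure 10 Hz alpha wave

alpha = band_power(signal, fs, 8, 13)      # alpha band (relaxation)
beta = band_power(signal, fs, 14, 30)      # beta band (arousal/stress)
stress_index = beta / (alpha + 1e-12)      # higher ratio -> more beta activity
```

Real EEG assessment pipelines add artifact removal, multiple channels, and trained classifiers on top of such features; the ratio alone is only a toy indicator.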


Cyberworlds | 2015

Haptic Interaction with Video Streams Containing Depth Data

Shahzad Rasool; Alexei Sourin

Video interaction is a common way of communication in cyberspace. It can become more immersive by incorporating the haptic modality. Using commonly available depth-sensing controllers such as Microsoft Kinect, information about the depth of a scene can be captured in real time together with the video. In this paper, we present a method for real-time haptic interaction with videos containing depth data. Forces are computed based on the depth information. Spatial and temporal filtering of the depth stream is used to provide stability of the force feedback delivered to the haptic device. Fast collision detection ensures that the proposed approach can be used in real time. The usefulness of the approach is illustrated by a tangible video application example.

Collaboration

Top co-authors of Shahzad Rasool:

Alexei Sourin (Nanyang Technological University)
Olga Sourina (Nanyang Technological University)
Xiyuan Hou (Nanyang Technological University)
Yisi Liu (Nanyang Technological University)
Vladimir Pestrikov (Moscow Institute of Physics and Technology)
Bin Weng (Nanyang Technological University)
Henry Johan (Nanyang Technological University)
Kan Chen (Nanyang Technological University)
Pingjun Xia (Nanyang Technological University)