Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Suraj Raghuraman is active.

Publication


Featured research published by Suraj Raghuraman.


acm multimedia | 2013

A 3D tele-immersion streaming approach using skeleton-based prediction

Suraj Raghuraman; Karthik Venkatraman; Zhanyu Wang; Balakrishnan Prabhakaran; Xiaohu Guo

3D collaborative Tele-Immersive environments allow reconstruction of real-world 3D scenes in the virtual world across multiple physical locations. This kind of reconstruction results in large amounts of 3D data being transmitted over the internet in real time. Current systems only achieve low frame rates due to the large volume of data and network bandwidth restrictions. In this paper we propose a prediction-based approach that generates future frames by animating the live model based on a few skeleton points. By doing so, the amount of data transmitted is reduced to a few hundred bytes. The prediction errors are corrected whenever an entire frame is received. This approach allows minimal amounts of data (a few hundred bytes) to be transmitted per frame, enabling high frame rates while still maintaining an acceptable visual quality of the reconstruction at the receiver side.
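
The receiver-side logic can be pictured with a minimal, hypothetical Python sketch, assuming a simple packet format and a stand-in deformation step (the paper's actual skinning method is not shown):

```python
# Hypothetical receiver-side sketch: full mesh frames are large and infrequent;
# skeleton packets are tiny (a few hundred bytes) and frequent. Between full
# frames, the last mesh is re-animated from the skeleton to keep frame rates high.

from dataclasses import dataclass

@dataclass
class Packet:
    kind: str              # "FULL_FRAME" or "SKELETON" (assumed format)
    mesh: list = None      # list of (x, y, z) vertices
    joints: dict = None    # joint name -> (x, y, z)

def animate(mesh, prev_joints, joints):
    # Stand-in for the paper's skeleton-driven deformation: here we just
    # translate all vertices by the root-joint motion. A real system would
    # use per-vertex skinning weights.
    dx, dy, dz = (joints["root"][i] - prev_joints["root"][i] for i in range(3))
    return [(x + dx, y + dy, z + dz) for (x, y, z) in mesh]

def receive_loop(packets, draw):
    last_mesh, last_joints = None, None
    for p in packets:
        if p.kind == "FULL_FRAME":                    # corrects prediction drift
            last_mesh = p.mesh
        elif p.kind == "SKELETON" and last_mesh and last_joints:
            last_mesh = animate(last_mesh, last_joints, p.joints)
        if p.joints:
            last_joints = p.joints
        if last_mesh:
            draw(last_mesh)
```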


international conference on multimedia and expo | 2015

Evaluating the efficacy of RGB-D cameras for surveillance

Suraj Raghuraman; Kanchan Bahirat; Balakrishnan Prabhakaran

RGB-D cameras have enabled real-time 3D video processing for numerous computer vision applications, especially surveillance-type applications. In this paper, we first present a real-time anti-forensic 3D object stream manipulation framework to capture and manipulate live RGB-D data streams to create realistic images/videos showing individuals performing activities they did not actually do. The framework uses computer vision and graphics methods to render photorealistic animations of live mesh models captured using the camera. Next, we conducted a visual inspection of the manipulated RGB-D streams (just as security personnel would do) by users who are computer vision and graphics scientists. The study shows that it was significantly difficult to distinguish between the real and reconstructed renderings of such 3D video sequences, clearly showing the potential security risk involved. Finally, we investigate the efficacy of forensic approaches for detecting such manipulations.


international conference on multimedia and expo | 2013

Describing a view alignment framework in 3D Tele-immersion systems

Karthik Venkatraman; Suraj Raghuraman; Balakrishnan Prabhakaran

A collaborative 3D Tele-immersion (3DTI) system consists of multiple sensor devices sending streams of data to and from each other. We consider the data streams produced by Microsoft Kinect cameras here. A single camera produces large volumes of data every second. Large bandwidth and low network delay are required to support the streaming of such large amounts of data in near real time and provide a good quality of experience (QoE). High network delay over the Internet causes a view disparity between the multiple sites and an inter-stream disparity between two streams from the same site. Both of these issues hinder the QoE. We address these issues here and provide our solutions for them. We describe an interpolation-based scheme which uses redundant data from the Kinect sensors to send only minimal amounts of data every second. We introduce a framework to put this interpolation scheme to effective use and to measure and minimize the view and inter-stream disparities across multiple sites using a time vector based solution.
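
A rough illustration of how a time-vector based solution might quantify the two disparities, assuming each stream carries a capture timestamp (the field names and dict-based "time vector" are illustrative, not the paper's implementation):

```python
# Hypothetical sketch of disparity measurement with per-stream timestamps.
# "Inter-stream disparity" is the timestamp spread among streams from one
# site; "view disparity" is the spread between the freshest frames from
# different sites.

def inter_stream_disparity(time_vector, site):
    """time_vector maps (site, stream_id) -> capture timestamp (seconds)."""
    stamps = [t for (s, _), t in time_vector.items() if s == site]
    return max(stamps) - min(stamps) if stamps else 0.0

def view_disparity(time_vector):
    """Spread between the newest frame each site has contributed."""
    newest_per_site = {}
    for (site, _), t in time_vector.items():
        newest_per_site[site] = max(newest_per_site.get(site, t), t)
    stamps = list(newest_per_site.values())
    return max(stamps) - min(stamps) if stamps else 0.0

# A scheduler could then delay rendering of "fast" streams until both
# disparities fall below a threshold, trading latency for consistency.
```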


acm multimedia | 2012

Immersive multiplayer tennis with microsoft kinect and body sensor networks

Suraj Raghuraman; Karthik Venkatraman; Zhanyu Wang; Jian Wu; Jacob Clements; Reza Lotfian; Balakrishnan Prabhakaran; Xiaohu Guo; Roozbeh Jafari; Klara Nahrstedt

We present an immersive gaming demonstration using a minimal number of wearable sensors. The game demonstrated is two-player tennis. We combine a virtual environment with real 3D representations of physical objects such as the players and the tennis racquet (if available). The main objective of the game is to provide as real an experience of tennis as possible, while also being as unobtrusive as possible. The game is played across a network, which opens the possibility of two remote players playing a game together on a single virtual tennis court. The Microsoft Kinect sensors are used to obtain a 3D point cloud and a skeletal map representation of the player. This 3D point cloud is mapped onto the virtual tennis court. We also use a wireless wearable Attitude and Heading Reference System (AHRS) mote, which is strapped onto the wrist of each player. This mote gives us precise information about the movement (swing, rotation, etc.) of the playing arm. This information, along with the skeletal map, is used to implement the physics of the game. Using this game, we demonstrate our solutions for simultaneous data acquisition, 3D point-cloud mapping into a virtual space, use of the Kinect and AHRS sensors to calibrate real and virtual objects, and interaction of virtual objects with a 3D point cloud.
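
As a rough illustration of combining the two sensors, the sketch below derives a hypothetical racquet pose from the Kinect wrist joint (position) and the AHRS orientation quaternion; the function names, quaternion convention, and racquet length are assumptions, not the demo's code:

```python
# Hypothetical fusion of Kinect skeleton (where the hand is) with the
# wrist-worn AHRS mote (how the hand is oriented) to place the racquet.

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # v' = v + w*t + q_vec x t, where t = 2 * (q_vec x v)
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    return (vx + w * tx + (y * tz - z * ty),
            vy + w * ty + (z * tx - x * tz),
            vz + w * tz + (x * ty - y * tx))

def racquet_state(wrist_position, ahrs_quaternion, racquet_length=0.5):
    """Racquet head = wrist joint + racquet axis rotated by the AHRS pose."""
    axis = quat_rotate(ahrs_quaternion, (0.0, 0.0, racquet_length))
    head = tuple(p + a for p, a in zip(wrist_position, axis))
    return wrist_position, head   # grip and head points for the physics step
```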


virtual reality software and technology | 2015

Distortion score based pose selection for 3D tele-immersion

Suraj Raghuraman; Balakrishnan Prabhakaran

3D Tele-Immersion (3DTI) systems capture and transmit large volumes of data per frame to enable virtual-world interaction between geographically distributed people. Large delays/latencies introduced during the transmission of these large volumes of data can lead to a poor quality of experience in 3DTI systems. Such poor experiences can potentially be overcome by animating the previously received mesh using the current skeletal data (which is very small in size and hence experiences much lower communication delays). However, using just the previously transmitted mesh for animation is not ideal and could render inconsistent results. In this paper, we present a DIstortion Score based Pose SElection (DISPOSE) approach to render the person by using an appropriate mesh for a given pose. Unlike pose-space animation methods that require manual or time-consuming offline pose set creation, our distortion score based scheme can choose the mesh to be transmitted and update the pose set accordingly. DISPOSE works with partial meshes and does not require dense registration, enabling real-time pose space creation. With DISPOSE incorporated into 3DTI, the latency for rendering the mesh on the receiving side is limited only by the transmission delay of the skeletal data (which is only around 250 bytes). Our evaluations show the effectiveness of DISPOSE for generating good-quality online animation faster than real time.
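
The selection step can be pictured with a minimal sketch, assuming a simple joint-distance stand-in for the paper's distortion score and an illustrative threshold:

```python
# Hypothetical sketch of distortion-score-driven pose selection. The actual
# DISPOSE score is not reproduced here; a sum of squared joint distances
# stands in for it.

def pose_distance(skel_a, skel_b):
    """Stand-in distortion score: squared distance summed over shared joints."""
    return sum(
        (skel_a[j][k] - skel_b[j][k]) ** 2
        for j in skel_a.keys() & skel_b.keys()
        for k in range(3)
    )

def select_mesh(pose_set, current_skeleton):
    """pose_set: list of (skeleton, mesh). Pick the mesh with minimal score."""
    return min(pose_set, key=lambda p: pose_distance(p[0], current_skeleton))

def maybe_extend_pose_set(pose_set, skeleton, mesh, threshold=0.05):
    """Sender side: transmit and store a new full mesh only when no stored
    pose is close enough, so the pose set grows online as new poses appear."""
    if not pose_set or pose_distance(select_mesh(pose_set, skeleton)[0],
                                     skeleton) > threshold:
        pose_set.append((skeleton, mesh))
        return True    # signal: send this full mesh to the receiver
    return False       # the skeleton alone (~250 bytes) suffices this frame
```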


international symposium on multimedia | 2015

Network Adaptive Textured Mesh Generation for Collaborative 3D Tele-Immersion

Kevin Desai; Kanchan Bahirat; Suraj Raghuraman; Balakrishnan Prabhakaran

3D Tele-Immersion (3DTI) has emerged as an efficient environment for virtual interactions and collaborations in a variety of fields such as rehabilitation, education, gaming, etc. In 3DTI, geographically distributed users are captured using multiple cameras and immersed in a single virtual environment. The quality of experience depends on the available network bandwidth, the quality of the 3D model generated, and the time taken for rendering. In a collaborative environment, achieving high-quality, high-frame-rate rendering while transmitting data to multiple sites with different bandwidths is challenging. In this paper we introduce a network adaptive textured mesh generation scheme that transmits data of varying quality based on the available bandwidth. To reduce the volume of information transmitted, a visual quality based vertex selection approach is used to generate a sparse representation of the user. This sparse representation is then transmitted to the receiver side, where a sweep-line based technique is used to generate a 3D mesh of the user. High visual quality is maintained by transmitting a high-resolution texture image compressed using a lossy compression algorithm. In our studies, users were unable to notice visual quality variations in the rendered 3D model even at 90% compression.
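
A minimal sketch of the bandwidth adaptation idea, with assumed per-vertex byte costs and an illustrative quality score; the paper's actual vertex-selection criterion and sweep-line reconstruction are not reproduced:

```python
# Hypothetical per-site adaptation: derive a vertex budget from the link's
# bandwidth, then keep the vertices that contribute most to visual quality.

VERTEX_BYTES = 15          # assumed cost: position + color per vertex

def vertex_budget(bandwidth_bps, frame_rate, texture_bytes):
    """Bytes available per frame, minus the compressed texture, in vertices."""
    per_frame = bandwidth_bps / 8 / frame_rate
    return max(0, int((per_frame - texture_bytes) / VERTEX_BYTES))

def select_vertices(vertices, scores, budget):
    """Keep the `budget` vertices with the highest visual-quality scores."""
    ranked = sorted(range(len(vertices)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:budget])          # preserve original vertex order
    return [vertices[i] for i in keep]
```

In a collaborative session, each receiving site would get its own budget, so a low-bandwidth site receives a sparser representation of the same user than a well-connected one.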


international symposium on multimedia | 2014

Quantifying and Improving User Quality of Experience in Immersive Tele-Rehabilitation

Karthik Venkatraman; Suraj Raghuraman; Yuan Tian; Balakrishnan Prabhakaran; Klara Nahrstedt; Thiru M. Annaswamy

3D Tele-Immersion (3DTI) environments are emerging as a new medium for human interactions and collaborations in the areas of education, sports training, and physical medicine and rehabilitation. By adding a tactile element to a visually centered 3DTI environment, such applications can be made even more engaging. However, this also opens up challenges in fusing the visual and tactile data streams synchronously. In this paper we describe a 3DTI Tele-Rehabilitation system built with Microsoft Kinect cameras and haptic devices. We describe some of the challenges we face in providing, as well as quantifying, a good quality of experience (QoE) in this system. We propose a set of solutions that: (i) improve the users' QoE (by using multi-modal prediction to handle latencies, better synchronization that accounts for the global state of the system, etc.); and (ii) quantify the QoE (by designing a controlled virtual environment and by defining appropriate user QoE metrics for immersive tele-rehabilitation). The experimental results show a marked improvement in the performance of the system, consequently improving the user experience. This is also verified by the results of the user performance study.
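
One of the synchronization ideas, aligning a fast haptic stream with slower video against a shared clock, might look roughly like this (a hypothetical policy, not the system's actual algorithm):

```python
# Hypothetical cross-modal synchronization against a shared clock. Haptic
# samples arrive far more often (~1 kHz) than video frames (~30 Hz); one
# simple policy renders the video frame whose timestamp is closest to the
# haptic state being displayed, bounded by a skew tolerance.

def pick_frame(frames, haptic_time, max_skew=0.050):
    """frames: list of (timestamp, frame) in order. Returns the frame best
    aligned with the haptic timeline, or None if everything is too stale."""
    best = min(frames, key=lambda f: abs(f[0] - haptic_time), default=None)
    if best is not None and abs(best[0] - haptic_time) <= max_skew:
        return best[1]
    return None
```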


acm multimedia | 2017

H-TIME: Haptic-enabled Tele-Immersive Musculoskeletal Examination

Yuan Tian; Suraj Raghuraman; Thiru M. Annaswamy; Aleksander Borresen; Klara Nahrstedt; Balakrishnan Prabhakaran

Current state-of-the-art tele-medicine applications only allow audiovisual communication between a doctor and the patient, necessitating a clinician to physically examine the patient. The doctor relies on the physical examination performed by the clinician, along with the audiovisual dialogue with the patient. In this paper, a Haptic-enabled Tele-Immersive Musculoskeletal Examination (H-TIME) system is introduced that allows doctors to physically examine the musculoskeletal conditions of patients remotely, by looking at a 3D reconstructed model of the patient in the virtual world and physically feeling the patient's range of mobility using a haptic device. The proposed bidirectional haptic rendering in H-TIME allows the doctor to remotely evaluate a patient who suffers from problems in their upper extremities, such as the shoulder, elbow, or wrist. A real-world user study was performed between doctors and patients, and it highlighted the potential of the proposed system. The study indicated a high degree of correlation between the in-person and H-TIME evaluations of the patients. Both the doctors and patients involved in the study felt that the system could someday replace in-person consultations.
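
The bidirectional coupling can be pictured with a toy spring model; this is a hypothetical illustration, not H-TIME's actual haptic rendering law:

```python
# Hypothetical bidirectional haptic loop: the doctor's device position drives
# a virtual probe on the patient side, and the displacement is fed back to
# the doctor as a resisting force via a simple virtual spring.

def haptic_step(doctor_pos, patient_limb_pos, stiffness=300.0):
    """Return (force_to_doctor, displacement_to_patient). A virtual spring
    couples the two sites so each side feels the other's motion."""
    dx = [d - p for d, p in zip(doctor_pos, patient_limb_pos)]
    force_to_doctor = [-stiffness * c for c in dx]   # resistance felt remotely
    return force_to_doctor, dx
```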


information processing in sensor networks | 2014

Demonstration abstract: upper body motion capture system using inertial sensors

Jian Wu; Zhanyu Wang; Suraj Raghuraman; Balakrishnan Prabhakaran; Roozbeh Jafari

Motion capture plays an important role in interactive gaming, animation, the film industry, and navigation. Existing camera-based motion capture studios are expensive and require a clear line of sight; hence they cannot be applied to ubiquitous applications. With the rapid development of low-cost MEMS sensors and sensor fusion techniques, inertial sensor based motion capture systems are attracting a lot of attention because of the seamless deployment, low system cost, and comparable accuracy they provide. In this paper, we demonstrate a wireless real-time inertial motion capture system.
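
The abstract does not describe the fusion algorithm; a complementary filter is a common baseline for this class of MEMS sensor fusion and is sketched below for illustration (the demonstrated system may well use a different method):

```python
# Hypothetical complementary filter for one rotation axis: integrate the
# gyroscope for short-term responsiveness, then pull toward the angle implied
# by the accelerometer's gravity vector to cancel long-term drift.

import math

def complementary_filter(angle, gyro_rate, accel_vec, dt, alpha=0.98):
    """angle: previous estimate (rad); gyro_rate: rad/s; accel_vec: (x, y, z)."""
    ax, ay, az = accel_vec
    accel_angle = math.atan2(ay, az)             # tilt implied by gravity
    gyro_angle = angle + gyro_rate * dt          # short-term integration
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```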


acm sigmm conference on multimedia systems | 2015

MMT+AVR: enabling collaboration in augmented virtuality/reality using ISO's MPEG media transport

Karthik Venkatraman; Yuan Tian; Suraj Raghuraman; Balakrishnan Prabhakaran; Nhut Nguyen

Augmented Reality (AR) and Augmented Virtuality (AV) systems have been used in various fields such as entertainment, broadcasting, and gaming [1]. Collaborative AR or AV (CAR/CAV) systems are a special class of such systems in which the interaction happens through the exchange of multi-modal data between multiple users/sites. Multiple sensors capture the real objects and enable interaction with shared virtual objects in a customizable virtual environment. Haptic devices can be added to introduce force feedback when the virtual objects are manipulated. These applications are demanding in terms of network resources, similar to broadcast applications: enabling real-time interaction with multiple modalities and high-volume data requires an advanced media transport protocol that supports low-latency media delivery and fast media source (channel) switching. Enabling such collaboration over a stochastic network like the Internet requires a combination of technologies ranging from data design and synchronization to real-time media delivery. MPEG Media Transport (MMT) [ISO/IEC 23008-1] is a new standard suite of protocols designed to work with demanding, real-time interactive multimedia applications, typically in the context of one-to-one and one-to-many communication. In this paper, we identify the augmentations required for the many-to-many nature of CAR/CAV applications and propose MMT+AVR as a middleware solution for use in CAV applications. Through an example CAV application implemented on top of MMT+AVR, we show how it provides efficient support for developing CAV applications with ease.
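
To illustrate the kind of per-stream metadata such a middleware must carry for many-to-many, multi-modal delivery, here is a hypothetical message envelope; it is not the MMT packet format defined in ISO/IEC 23008-1:

```python
# Hypothetical multi-modal message envelope for many-to-many collaboration.
# It shows the metadata a CAR/CAV middleware plausibly needs (sender,
# modality, sequence, global timestamp), not MMT's actual wire format.

import json, time

def make_envelope(sender_id, modality, sequence, payload_bytes):
    """modality: e.g. 'mesh', 'skeleton', 'haptic', 'audio' (assumed names)."""
    header = {
        "sender": sender_id,      # many-to-many: every site is a source
        "modality": modality,     # lets receivers fan streams out per type
        "seq": sequence,          # detects loss / reordering per stream
        "ts": time.time(),        # global timestamp for cross-site sync
        "len": len(payload_bytes),
    }
    return json.dumps(header).encode() + b"\n" + payload_bytes
```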

Collaboration


Suraj Raghuraman's top co-authors:

Karthik Venkatraman (University of Texas at Dallas)
Kanchan Bahirat (University of Texas at Dallas)
Kevin Desai (University of Texas at Dallas)
Yuan Tian (University of Texas at Dallas)
Xiaohu Guo (University of Texas at Dallas)
Zhanyu Wang (University of Texas at Dallas)
Jian Wu (University of Texas at Dallas)
Thiru M. Annaswamy (University of Texas Southwestern Medical Center)